Installing Oracle Database 19c RAC on Linux


Oracle Real Application Clusters (RAC) lets customers run a single Oracle database across multiple servers, maximizing availability and enabling horizontal scalability while all instances access shared storage. User sessions connected to Oracle RAC instances can fail over during an outage and safely replay in-flight changes without any modification to the end-user application, hiding the impact of the outage from end users.

RAC installation reference: https://docs.oracle.com/en/database/oracle/oracle-database/19/rilin/index.html

RAC administration reference: https://docs.oracle.com/en/database/oracle/oracle-database/19/racad/index.html

Grid installation reference: https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/index.html

Preparing the software packages

Software required for the installation:

  • LINUX.X64_193000_grid_home.zip
  • LINUX.X64_193000_db_home.zip

Download: https://www.oracle.com/database/technologies/oracle19c-linux-downloads.html

Preparing the environment

  • OS: CentOS 7.8 minimal
  • CPU/RAM: 2 cores / 8 GB
  • Number of servers: 2

Note: unless otherwise stated, all subsequent steps are performed on all nodes.

Node network planning

Reference: https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/about-oracle-grid-infrastructure-network-configuration-options.html

Two servers are prepared for the RAC cluster; each node needs at least two NICs:

Hostname   ens33 (public IP)   ens37 (private IP)   VIP             SCAN IPs
node1      192.168.93.20       192.168.13.10        192.168.93.30   192.168.93.40
node2      192.168.93.21       192.168.13.11        192.168.93.31   192.168.93.41
                                                                    192.168.93.42

NIC and IP planning notes:

  • Since VMware Workstation is used, the ens33 NIC runs in NAT mode and the ens37 NIC in Host-only mode.
  • ens33 is the public network interface; users and application servers connect through it to access data on the database servers.
  • ens37 is the private network interface, on a different subnet from the public IPs; it is used for inter-node communication.
  • Each node has a VIP on the same subnet as the public IPs; the VIP is bound to that node's public NIC.
  • Configure 1 to 3 SCAN VIPs on the same subnet as the public IPs; 3 are recommended to avoid a single point of failure. The SCAN IPs are distributed across the public NICs of the different nodes.
Node disk group planning

Reference: https://docs.oracle.com/en/database/oracle/oracle-database/19/ladbi/identifying-storage-requirements-for-oracle-automatic-storage-management.html

Plan three ASM disk groups, each containing one or more disks. /dev/sda is the system disk; the ASM disks are planned as follows:

Disk group   Device     Size   ASM disk name   Purpose              Redundancy
OCR          /dev/sdb   10G    asm-ocr1        OCR/Voting File      EXTERNAL
DATA         /dev/sdc   40G    asm-data1       Data Files           EXTERNAL
FRA          /dev/sdd   20G    asm-fra1        Fast Recovery Area   EXTERNAL

Each disk group can contain multiple disks to provide redundancy; the minimum number of disks depends on the redundancy level:

  • EXTERNAL: each extent (ASM's smallest allocation unit) is stored only once, with no ASM-level redundancy; at least one disk is needed (no failure group). This level generally relies on hardware RAID at the storage layer.
  • NORMAL: each extent has one mirror copy, so ASM stores two copies; at least two disks (two failure groups) are needed, and the usable space is 1/2 of the total capacity of all disks.
  • HIGH: each extent has two mirror copies, so ASM stores three copies; at least three disks (three failure groups) are needed, and the usable space is 1/3 of the total capacity of all disks.
  • FLEX: a flex-redundancy disk group can be set to any protection mode (3 copies, 2 copies, or unprotected); by default, a flex disk group uses 2 copies.
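To make the space math concrete, here is a quick arithmetic sketch (not real ASM output) of usable capacity for a hypothetical disk group of three 40G disks under each redundancy level:

```shell
# usable space = total raw capacity / number of copies ASM keeps
total_gb=$((40 * 3))         # three 40G disks
external=$((total_gb / 1))   # EXTERNAL: single copy
normal=$((total_gb / 2))     # NORMAL: two copies
high=$((total_gb / 3))       # HIGH: three copies
echo "EXTERNAL=${external}G NORMAL=${normal}G HIGH=${high}G"
# prints: EXTERNAL=120G NORMAL=60G HIGH=40G
```

Real usable space is somewhat lower because ASM also reserves capacity to rebalance after a disk failure.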

The final VM configuration looks like this:

Creating shared disks

Create the shared disks with the VMware Workstation tooling. Run PowerShell as administrator and execute:

cd "C:\Program Files (x86)\VMware\VMware Workstation"

./vmware-vdiskmanager.exe -c -s 10GB -t 4 sharedisk01.vmdk
./vmware-vdiskmanager.exe -c -s 40GB -t 4 sharedisk02.vmdk
./vmware-vdiskmanager.exe -c -s 20GB -t 4 sharedisk03.vmdk

Add the shared disks to each VM node in the same way: edit the virtual machine -> add a disk -> use an existing virtual disk -> select the existing disk -> keep the existing format.

Default disk path:

C:\Program Files (x86)\VMware\VMware Workstation

Add the remaining disks following the steps above. Before powering on, edit each VM's configuration file (e.g. CentOS78-93.20.vmx):

Append the following to the end of the file; only then will the VM start normally:

disk.locking = "false"
scsi1.shareBus = "VIRTUAL"
disk.EnableUUID = "TRUE"

Check the disk information

[root@localhost ~]# lsscsi
[0:0:0:0]    disk    VMware,  VMware Virtual S 1.0   /dev/sda 
[0:0:1:0]    disk    VMware,  VMware Virtual S 1.0   /dev/sdb 
[0:0:2:0]    disk    VMware,  VMware Virtual S 1.0   /dev/sdc 
[0:0:3:0]    disk    VMware,  VMware Virtual S 1.0   /dev/sdd
Configuring /etc/hosts

Set the hostnames

# node1
hostnamectl set-hostname node1

# node2
hostnamectl set-hostname node2

Edit /etc/hosts and add the following entries:

cat >/etc/hosts<< EOF
# Public
192.168.93.20 node1 node1.racdb.local
192.168.93.21 node2 node2.racdb.local

# Private
192.168.13.10 node1-priv node1-priv.racdb.local
192.168.13.11 node2-priv node2-priv.racdb.local

# Virtual
192.168.93.30 node1-vip node1-vip.racdb.local
192.168.93.31 node2-vip node2-vip.racdb.local

# SCAN
192.168.93.40 node-cluster-scan node-cluster-scan.racdb.local
192.168.93.41 node-cluster-scan node-cluster-scan.racdb.local
192.168.93.42 node-cluster-scan node-cluster-scan.racdb.local
EOF
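Before moving on, it can help to confirm that every planned name resolves; a quick sketch, assuming the /etc/hosts entries above are in place:

```shell
# check that every cluster hostname resolves (via /etc/hosts or DNS);
# any name printed as MISSING must be fixed before the installer's checks
for h in node1 node2 node1-priv node2-priv node1-vip node2-vip node-cluster-scan; do
  if getent hosts "$h" >/dev/null; then
    echo "ok: $h"
  else
    echo "MISSING: $h"
  fi
done
```

The installer's own cluster verification (cluvfy) performs a stricter version of this resolution check later.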

Configure a static IP address on the second NIC

# node1
nmcli con mod "Wired connection 1" \
  ipv4.method manual \
  con-name ens37 \
  ipv4.addresses 192.168.13.10/24 \
  connection.autoconnect yes

nmcli device reapply ens37

# node2
nmcli con mod "Wired connection 1" \
  ipv4.method manual \
  con-name ens37 \
  ipv4.addresses 192.168.13.11/24 \
  connection.autoconnect yes

nmcli device reapply ens37

Check the node IP address information

[root@node1 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:b6:45:44 brd ff:ff:ff:ff:ff:ff
    inet 192.168.93.20/24 brd 192.168.93.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::f7e2:c660:346d:b6d5/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: ens37:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:b6:45:4e brd ff:ff:ff:ff:ff:ff
    inet 192.168.13.10/24 brd 192.168.13.255 scope global noprefixroute ens37
       valid_lft forever preferred_lft forever
    inet6 fe80::71e:6f8e:388b:9bec/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
Configuring time synchronization

Use chrony to synchronize time from public NTP servers:

yum install -y chrony
systemctl enable --now chronyd
Configuring SELinux and the firewall

Set SELinux to permissive

sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/sysconfig/selinux
setenforce 0

Disable the firewalld firewall

systemctl disable --now firewalld
Installing dependency packages

References:

https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/supported-red-hat-enterprise-linux-7-distributions-for-x86-64.html

https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/installation-requirements-for-programming-environments-for-linux-x86-64.html

Oracle Database 19c requires a number of packages; install them with yum before continuing.

yum install -y \
  bc \
  binutils \
  compat-libcap1 \
  compat-libstdc++-33 \
  elfutils-libelf \
  elfutils-libelf-devel \
  fontconfig-devel \
  glibc \
  glibc-devel \
  ksh \
  libaio \
  libaio-devel \
  libX11 \
  libXau \
  libXi \
  libXtst \
  libXrender \
  libXrender-devel \
  libgcc \
  libstdc++ \
  libstdc++-devel \
  libxcb \
  make \
  smartmontools \
  sysstat \
  net-tools \
  unzip \
  nfs-utils \
  gcc \
  gcc-c++
Creating users and groups

Create the users and groups

# create the groups
groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
groupadd -g 54324 backupdba
groupadd -g 54325 dgdba
groupadd -g 54326 kmdba
groupadd -g 54327 asmdba
groupadd -g 54328 asmoper
groupadd -g 54329 asmadmin
groupadd -g 54330 racdba

# create the users and add them to the groups
useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,racdba,oper oracle 
useradd -u 54331 -g oinstall -G dba,asmdba,asmoper,asmadmin,racdba grid

# set the user passwords
echo "oracle" | passwd oracle --stdin
echo "grid" | passwd grid --stdin
Creating the directories

参考:https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/about-creating-oracle-base-oracle-home-directories.html

mkdir -p /u01/app/19.3.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle/product/19.3.0/dbhome_1

chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
Enabling /dev/shm (shmem)

Reference: https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/server-configuration-checklist-for-oracle-grid-infrastructure.html

Run on all nodes (required by the prerequisite checks). The heredoc body was lost in the original page; the entry below is a typical reconstruction: /dev/shm must be mounted exec with enough space for the instance memory targets (4G is used here for an 8 GB host; adjust to your RAM):

cat >>/etc/fstab<<EOF
tmpfs  /dev/shm  tmpfs  rw,exec,size=4G  0 0
EOF

mount -o remount /dev/shm
Configuring NOZEROCONF

Run on all nodes (required by the prerequisite checks):

cat >>/etc/sysconfig/network<<EOF
NOZEROCONF=yes
EOF
Login (PAM) configuration

cat >>/etc/pam.d/login<<EOF
session required pam_limits.so
EOF
Configuring kernel parameters

Reference: https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/changing-kernel-parameter-values.html

# set the following kernel parameters (minimum values from the installation
# guide; tune kernel.shmall/kernel.shmmax to your RAM)
cat >/etc/sysctl.d/97-oracledatabase-sysctl.conf<<EOF
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
EOF

# apply the settings
sysctl --system
Setting resource limits for the users

Reference: https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/checking-resource-limits-for-oracle-software-installation-users.html

Configure resource limits for the oracle and grid users (the original heredoc bodies were lost; the values below follow the installation guide minimums):

cat >/etc/security/limits.d/30-oracle.conf<<EOF
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
EOF

cat >>/etc/security/limits.d/20-nproc.conf<<EOF
* - nproc 16384
EOF
Updating the user profiles

Note that the ORACLE_HOSTNAME and ORACLE_SID variables differ between node1 and node2.

For the grid user, ORACLE_SID is +ASM1 on node1 and +ASM2 on node2.
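Since only the trailing node number differs, a small hypothetical helper (not part of the original walkthrough) can derive both SIDs from a nodeN-style hostname, which makes the per-node difference explicit:

```shell
# derive the grid (+ASMn) and database (racdbN) SIDs from a hostname like node1/node2
sid_for() {
  local host=$1
  local num=${host#node}    # strip the "node" prefix, keeping the number
  echo "+ASM${num} racdb${num}"
}

sid_for node1   # prints: +ASM1 racdb1
sid_for node2   # prints: +ASM2 racdb2
```

This assumes the node1/node2 naming convention used throughout this guide; with different hostnames, set the SIDs by hand as in the profiles below.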

node1 configuration

# grid user
cat>>/home/grid/.bash_profile<<'EOF'
# oracle grid
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=node1.racdb.local
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19.3.0/grid
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
EOF

# oracle user
cat>>/home/oracle/.bash_profile<<'EOF'
# oracle
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=node1.racdb.local
export ORACLE_UNQNAME=racdb
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/19.3.0/dbhome_1
export ORACLE_SID=racdb1
export ORACLE_TERM=xterm
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
EOF

node2 configuration

# grid user
cat>>/home/grid/.bash_profile<<'EOF'
# oracle grid
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=node2.racdb.local
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19.3.0/grid
export ORACLE_SID=+ASM2
export ORACLE_TERM=xterm
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
EOF

# oracle user
cat>>/home/oracle/.bash_profile<<'EOF'
# oracle
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=node2.racdb.local
export ORACLE_UNQNAME=racdb
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/19.3.0/dbhome_1
export ORACLE_SID=racdb2
export ORACLE_TERM=xterm
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
EOF
Configuring passwordless SSH between nodes

On node1:

su - grid
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
ssh-copy-id node2
 
su - oracle
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
ssh-copy-id node2

On node2:

su - grid
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
ssh-copy-id node1
 
su - oracle
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
ssh-copy-id node1
Disabling Transparent HugePages

参考:https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/disabling-transparent-hugepages.html

Manage THP with a systemd unit (the [Unit] and [Service] sections were eaten by the original page's HTML stripping; reconstructed below):

# create the systemd unit file
cat > /etc/systemd/system/disable-thp.service <<EOF
[Unit]
Description=Disable Transparent HugePages
After=sysinit.target local-fs.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"

[Install]
WantedBy=multi-user.target
EOF

# enable and start the service
systemctl enable --now disable-thp
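After enabling the service, the current mode can be read back from sysfs; '[never]' should be the bracketed (active) value. A quick check, guarded for kernels that do not expose the THP interface:

```shell
# show the active THP mode; the selected value appears in square brackets
thp=/sys/kernel/mm/transparent_hugepage/enabled
if [ -r "$thp" ]; then
  cat "$thp"
else
  echo "THP interface not present on this kernel"
fi
```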
Configuring shared storage with oracleasm

Reference: https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/configuring-disk-devices-to-use-oracle-asmlib.html

Download oracleasmlib and oracleasm-support for RHEL7; they must be installed and configured on all nodes.

Download: https://www.oracle.com/linux/downloads/linux-asmlib-rhel7-downloads.html

yum install -y kmod-oracleasm

wget https://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.12-1.el7.x86_64.rpm
wget https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracleasm-support-2.1.11-2.el7.x86_64.rpm
yum -y localinstall oracleasmlib-2.0.12-1.el7.x86_64.rpm
yum -y localinstall oracleasm-support-2.1.11-2.el7.x86_64.rpm

# initialize
oracleasm init

# configure: enable on boot, disks owned by grid:asmadmin
oracleasm configure -e -u grid -g asmadmin

Check the configuration

[root@node1 ~]# oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=asmadmin
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_SCAN_DIRECTORIES=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"

Identify the shared disks created earlier and partition them:

parted /dev/sdb -s -- mklabel gpt mkpart primary 1 -1
parted /dev/sdc -s -- mklabel gpt mkpart primary 1 -1
parted /dev/sdd -s -- mklabel gpt mkpart primary 1 -1

Confirm the partitioning

[root@node1 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   70G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   69G  0 part 
  ├─centos-root 253:0    0 60.1G  0 lvm  /
  ├─centos-swap 253:1    0  3.9G  0 lvm  [SWAP]
  └─centos-home 253:2    0    5G  0 lvm  /home
sdb               8:16   0   10G  0 disk 
└─sdb1            8:17   0   10G  0 part 
sdc               8:32   0   40G  0 disk 
└─sdc1            8:33   0   40G  0 part 
sdd               8:48   0   20G  0 disk 
└─sdd1            8:49   0   20G  0 part

Run on node1 only: create the ASM disks with oracleasm, adjusting the device names to match your system:

oracleasm createdisk OCR1 /dev/sdb1
oracleasm createdisk DATA1 /dev/sdc1
oracleasm createdisk FRA1 /dev/sdd1

Run on all nodes

oracleasm scandisks
oracleasm listdisks

Result

[root@node1 ~]# oracleasm listdisks
DATA1
FRA1
OCR1

Check the disk devices

[root@node1 ~]# ls -l /dev/oracleasm/disks
total 0
brw-rw---- 1 grid asmadmin 8, 33 Oct 18 22:21 DATA1
brw-rw---- 1 grid asmadmin 8, 49 Oct 18 22:21 FRA1
brw-rw---- 1 grid asmadmin 8, 17 Oct 18 22:21 OCR1
Installing Grid Infrastructure

Perform this on the first node, node1.

Log in as the grid user over SSH and upload the downloaded LINUX.X64_193000_grid_home.zip to the grid user's $ORACLE_HOME directory.

Extract it into $ORACLE_HOME:

[grid@node1 ~]$ unzip LINUX.X64_193000_grid_home.zip -d $ORACLE_HOME

Copy the cvuqdisk rpm to every node in the cluster

参考:https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/installing-the-cvuqdisk-rpm-for-linux.html

scp $ORACLE_HOME/cv/rpm/cvuqdisk-1.0.10-1.rpm root@node2:/tmp

Switch back to the root user and install the cvuqdisk rpm.

# node1
CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
rpm -iv /u01/app/19.3.0/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm

# node2
CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
rpm -iv /tmp/cvuqdisk-1.0.10-1.rpm

Since the OS is a minimal install with no graphical environment, install xorg-x11 on node1 and install Xming on Windows to display the installer's GUI:

yum install -y xorg-x11-xinit

# log out and back in for the change to take effect
exit
exit

In SecureCRT, enable X11 forwarding (other SSH clients such as Xshell are configured similarly):

On Windows, download and install Xming Server and simply start it; SecureCRT will forward the graphical display to Xming.

Log in to node1 as the grid user and change to the ORACLE_HOME directory

[grid@node1 ~]$ cd $ORACLE_HOME

Run the following on node1 to start the Grid installation:

./gridSetup.sh

Note: if you are connecting remotely, you must log in directly as the grid user to forward the graphical display; switching to grid from root with su will not work.
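Before launching gridSetup.sh, it is worth confirming that the forwarded display is usable from the grid session; a small check (xdpyinfo is part of the X11 utilities and may need installing separately):

```shell
# verify that X11 forwarding populated DISPLAY and that the display answers
echo "DISPLAY=$DISPLAY"
if command -v xdpyinfo >/dev/null 2>&1 && xdpyinfo >/dev/null 2>&1; then
  echo "X display reachable"
else
  echo "X display NOT reachable - log in again directly as grid with X11 forwarding enabled"
fi
```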

Cluster configuration

For the SCAN NAME, use the SCAN name defined in /etc/hosts

Add the node2 node

Review the added nodes

Configure the network interfaces

Configure the storage option

Configure GIMR

Select Change Discovery Path

Set the path to:

/dev/oracleasm/disks/

Set the passwords (Oracle#123 is used in this walkthrough)

Configure Failure Isolation

Configure the management options

Configure the OS groups

Configure the installation option

Configure the inventory location

Configure root script execution

Pre-installation checks

Review the configuration summary

Scripts are executed automatically

Installation complete; the failed check items can be ignored

Get the status and configuration information of specific resources

[grid@node1 grid]$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.chad
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.net1.network
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.ons
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node2                    STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node1                    STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node1                    STABLE
ora.OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       node1                    STABLE
ora.node1.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.node2.vip
      1        ONLINE  ONLINE       node2                    STABLE
ora.qosmserver
      1        ONLINE  ONLINE       node1                    STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       node2                    STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       node1                    STABLE
--------------------------------------------------------------------------------

Check the status of the Oracle High Availability Services and Oracle Clusterware stacks on the local server

[grid@node1 grid]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Check node1's IP information

[root@node1 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:b6:45:44 brd ff:ff:ff:ff:ff:ff
    inet 192.168.93.20/24 brd 192.168.93.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.93.30/24 brd 192.168.93.255 scope global secondary ens33:1
       valid_lft forever preferred_lft forever
    inet 192.168.93.41/24 brd 192.168.93.255 scope global secondary ens33:3
       valid_lft forever preferred_lft forever
    inet 192.168.93.42/24 brd 192.168.93.255 scope global secondary ens33:4
       valid_lft forever preferred_lft forever
    inet6 fe80::f7e2:c660:346d:b6d5/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: ens37:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:b6:45:4e brd ff:ff:ff:ff:ff:ff
    inet 192.168.13.10/24 brd 192.168.13.255 scope global noprefixroute ens37
       valid_lft forever preferred_lft forever
    inet 169.254.17.241/19 brd 169.254.31.255 scope global ens37:1
       valid_lft forever preferred_lft forever
    inet6 fe80::71e:6f8e:388b:9bec/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

Check node2's IP information

[root@node2 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ab:d1:6d brd ff:ff:ff:ff:ff:ff
    inet 192.168.93.21/24 brd 192.168.93.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.93.40/24 brd 192.168.93.255 scope global secondary ens33:1
       valid_lft forever preferred_lft forever
    inet 192.168.93.31/24 brd 192.168.93.255 scope global secondary ens33:2
       valid_lft forever preferred_lft forever
    inet6 fe80::f7e2:c660:346d:b6d5/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::30f5:d4f6:f0f0:8564/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: ens37:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ab:d1:77 brd ff:ff:ff:ff:ff:ff
    inet 192.168.13.11/24 brd 192.168.13.255 scope global noprefixroute ens37
       valid_lft forever preferred_lft forever
    inet 169.254.25.6/19 brd 169.254.31.255 scope global ens37:1
       valid_lft forever preferred_lft forever
    inet6 fe80::153a:c28f:f182:d96/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
Creating the disk groups for the database

As the grid user, run asmca:

[grid@node1 grid]$ asmca

Create the new disk groups

Review the disk groups

Query the disk group mount status and the CRSD status:

[grid@node1 ~]$ sqlplus / as sysasm

SQL> select NAME,state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
OCR                            MOUNTED
DATA                           MOUNTED
FRA                            MOUNTED
Installing the Oracle database software

Log in as the oracle user over SSH and extract the downloaded zip into the ORACLE_HOME directory.

[oracle@node1 ~]$ unzip LINUX.X64_193000_db_home.zip -d $ORACLE_HOME

Change to the ORACLE_HOME directory

cd $ORACLE_HOME

Then run the installer

./runInstaller

Start the installation

Configure SSH connectivity and click Setup

Select the database edition

Select the installation location

Configure the OS groups

Run root scripts automatically

Pre-installation checks

Review the global settings

Installation complete

Creating the database

Connect as the oracle user over SSH and verify the DBCA prerequisites:

/u01/app/19.3.0/grid/bin/cluvfy stage -pre dbcfg -fixup -n node1,node2 \
  -d /u01/app/oracle/product/19.3.0/dbhome_1 -verbose

Run dbca:

dbca

Create a database

Configuration mode

Deployment type

Node selection

Database identification

Storage configuration

Enable the Fast Recovery Area and archiving here, using the FRA disk group as the destination:

Database options

Configuration options

Management options

User credentials (Oracle#123 is used in this walkthrough)

Creation option

Prerequisite checks; tick the option to ignore the failed items

Global configuration summary

Start the installation

Installation complete

Check the status

[grid@node1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.chad
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.net1.network
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.ons
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.FRA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node2                    STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       node1                    STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       node1                    STABLE
ora.OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       node1                    STABLE
      2        ONLINE  ONLINE       node2                    STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       node1                    STABLE
ora.node1.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.node2.vip
      1        ONLINE  ONLINE       node2                    STABLE
ora.qosmserver
      1        ONLINE  ONLINE       node1                    STABLE
ora.racdb.db
      1        ONLINE  ONLINE       node1                    Open,HOME=/u01/app/o
                                                             racle/product/19.3.0
                                                             /dbhome_1,STABLE
      2        ONLINE  ONLINE       node2                    Open,HOME=/u01/app/o
                                                             racle/product/19.3.0
                                                             /dbhome_1,STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       node2                    STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       node1                    STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       node1                    STABLE
--------------------------------------------------------------------------------

Verify the database status:

[oracle@node1 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node node1
Instance racdb2 is running on node node2

Check the database configuration

[oracle@node1 ~]$ srvctl config database -d racdb
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/19.3.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/RACDB/PARAMETERFILE/spfile.272.1086358693
Password file: +DATA/RACDB/PASSWORD/pwdracdb.256.1086353867
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: DATA,FRA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: oper
Database instances: racdb1,racdb2
Configured nodes: node1,node2
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services: 
Database is administrator managed

Connect to the database and check the instances

[oracle@node1 ~]$ sqlplus / as sysdba

SQL> select instance_name,status from gv$Instance;

INSTANCE_NAME    STATUS
---------------- ------------
racdb2           OPEN
racdb1           OPEN

References:
https://www.cnblogs.com/ryanw/articles/12540153.html
https://blog.csdn.net/huang987246510/article/details/116291633
https://dbtut.com/index.php/2020/11/18/how-to-install-oracle-rac-19c-on-linux/
https://www.bigdba.com/oracle/831/how-to-install-oracle-19c-two-node-real-application-clusterrac-on-the-google-cloud-platform/
