Item | Database 1 | Database 2 |
---|---|---|
Hostname | rac1 | rac2 |
OS | Red Hat Enterprise Linux Server release 7.2 (Maipo) | Red Hat Enterprise Linux Server release 7.2 (Maipo) |
Kernel | 3.10.0-327.el7.x86_64 | 3.10.0-327.el7.x86_64 |
Public NIC | enp0s9 | enp0s9 |
Public IP | 192.168.56.101 | 192.168.56.102 |
Private NIC | enp0s8 | enp0s8 |
Private IP | 10.0.0.7 | 10.0.0.8 |
Virtual IP | 192.168.56.103 | 192.168.56.104 |
SCAN name | definescan | |
SCAN IP | 192.168.56.105 | |
Unless otherwise noted, the following configuration must be performed on all nodes.
rac1 (root)

```
# hostnamectl set-hostname rac1
# su
# hostname
rac1
```

rac2 (root)

```
# hostnamectl set-hostname rac2
# su
# hostname
rac2
```
rac1 (root), rac2 (root)

Append the following entries to /etc/hosts on both nodes (the SCAN IP follows the plan above):

```
# public IP
192.168.56.101 rac1
192.168.56.102 rac2
# virtual IP
192.168.56.103 rac1-vip
192.168.56.104 rac2-vip
# rac scan IP
192.168.56.105 definescan
# private IP
10.0.0.7 rac1-priv
10.0.0.8 rac2-priv
```
!> All NICs must be configured with static IPs.
List all NICs with nmcli con show:

```
[root@rac1 /]# nmcli con show
NAME        UUID                                  TYPE            DEVICE
enp0s8      e849869a-de19-4824-bafd-4491e66e8ca4  802-3-ethernet  enp0s8
enp0s3      86db33b5-ea89-47aa-a038-98f6029fa608  802-3-ethernet  enp0s3
enp0s9      706ffc32-e82c-4a01-8b8f-eefbf92950ff  802-3-ethernet  --
virbr0-nic  1ac00d88-3f52-4dad-8da7-006b9073469f  802-3-ethernet  virbr0-nic
virbr0      00facfd9-5460-4846-8e94-1a12de673348  bridge          virbr0
```
Then go to /etc/sysconfig/network-scripts and locate each NIC's configuration file by its name; the files are normally named ifcfg-NAME.
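For example, a quick way to see which configuration files already exist (the exact list varies per host):

```bash
# list the existing NIC configuration files
ls /etc/sysconfig/network-scripts/ifcfg-*
```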
rac1 (root)

ifcfg-enp0s9 (public IP)

```
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPADDR=192.168.56.101
IPV4_FAILURE_FATAL=no
NAME=enp0s9
UUID=706ffc32-e82c-4a01-8b8f-eefbf92950ff
DEVICE=enp0s9
ONBOOT=yes
```

ifcfg-enp0s8 (private IP)

```
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPADDR=10.0.0.7
NAME=enp0s8
UUID=e849869a-de19-4824-bafd-4491e66e8ca4
DEVICE=enp0s8
ONBOOT=yes
PEERDNS=yes
PEERROUTES=yes
```
rac2 (root)

ifcfg-enp0s9 (public IP)

```
HWADDR=08:00:27:26:72:E5
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPADDR=192.168.56.102
IPV4_FAILURE_FATAL=no
NAME=enp0s9
UUID=bc89e1c6-2457-41ce-a366-5a505c5d1cd3
ONBOOT=yes
```

ifcfg-enp0s8 (private IP)

```
HWADDR=08:00:27:F9:1B:62
TYPE=Ethernet
IPADDR=10.0.0.8
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
NAME=enp0s8
UUID=0d68de3e-74ab-4e0d-af99-12a68f5f7525
ONBOOT=yes
```
Verify connectivity between the two nodes with ping, on both nodes:

```
ping rac1
ping rac2
ping rac1-priv
ping rac2-priv
```
If a configuration file cannot be found under the network-scripts directory, create one yourself; the NIC's UUID can be looked up with nmcli con show.
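If you write the file from scratch and nmcli does not already list a connection for the device, a fresh UUID can be generated (a minimal sketch):

```bash
# generate a UUID for the UUID= line of a hand-written ifcfg file
uuidgen
```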
Disable the firewall:

```
# systemctl status firewalld.service
# systemctl stop firewalld.service
# systemctl disable firewalld.service
```
Edit the file /etc/selinux/config and set

```
SELINUX=disabled
```

to disable SELinux. To turn it off immediately without a reboot:

```
# setenforce 0
# getenforce
```
The RAC installation depends on a large number of packages, all of which ship on the OS installation image, so they can be installed by mounting the image and configuring a local yum repository. The way to attach an image differs per virtualization environment, but the steps are simple; VirtualBox is used as the example here.
The image appears as /dev/sr0 and can be mounted under /mnt with the mount command:

```
# mount /dev/sr0 /mnt
# cd /mnt
# ll
total 872
dr-xr-xr-x.  4 root root   2048 Oct 30  2015 addons
dr-xr-xr-x.  3 root root   2048 Oct 30  2015 EFI
-r--r--r--.  1 root root   8266 Apr  4  2014 EULA
-r--r--r--.  1 root root  18092 Mar  6  2012 GPL
dr-xr-xr-x.  3 root root   2048 Oct 30  2015 images
dr-xr-xr-x.  2 root root   2048 Oct 30  2015 isolinux
dr-xr-xr-x.  2 root root   2048 Oct 30  2015 LiveOS
-r--r--r--.  1 root root    114 Oct 30  2015 media.repo
dr-xr-xr-x.  2 root root 835584 Oct 30  2015 Packages
dr-xr-xr-x. 24 root root   6144 Oct 30  2015 release-notes
dr-xr-xr-x.  2 root root   4096 Oct 30  2015 repodata
-r--r--r--.  1 root root   3375 Oct 23  2015 RPM-GPG-KEY-redhat-beta
-r--r--r--.  1 root root   3211 Oct 23  2015 RPM-GPG-KEY-redhat-release
-r--r--r--.  1 root root   1568 Oct 30  2015 TRANS.TBL
```
```
# cd /etc/yum.repos.d
# cat <<EOF > redhat7.2iso.repo
[rhel7]
name=Red Hat Enterprise Linux 7.2
baseurl=file:///mnt/
gpgcheck=0
enabled=1
EOF
# yum clean all
# yum grouplist
# yum makecache
```
If these commands produce normal output, the repository is configured correctly.

The default Red Hat yum repository requires a registered subscription; on an unregistered system, yum reports the following error:

```
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
```

The fix is to drop the built-in repository by simply deleting the file /etc/yum.repos.d/redhat.repo.
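A minimal way to do that:

```bash
# remove the subscription-managed repo definition, then rebuild the yum cache
rm -f /etc/yum.repos.d/redhat.repo
yum clean all
```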
This RAC installation is done through the GUI, so VNC must be installed first; the database installers are then run inside a VNC session. Before installing VNC, make sure the image-mounting and local yum repository steps above have been completed.
```
# yum install tigervnc-server
```
Edit the file /lib/systemd/system/vncserver@.service and replace <USER> with the login user; root is used directly here.

```
[Unit]
Description=Remote desktop service (VNC)
After=syslog.target network.target

[Service]
Type=forking
# Clean any existing files in /tmp/.X11-unix environment
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/usr/sbin/runuser -l root -c "/usr/bin/vncserver %i"
PIDFile=/root/.vnc/%H%i.pid
ExecStop=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'

[Install]
WantedBy=multi-user.target
```
After the change, reload systemd:

```
# systemctl daemon-reload
```

Then start the server:

```
# vncserver
```
The first start prompts for a password. Once running, the default port is 5901; the port can also be checked with:

```
# netstat -npl | grep vnc
tcp    0   0 0.0.0.0:5901    0.0.0.0:*    LISTEN    7048/Xvnc
tcp6   0   0 :::5901         :::*         LISTEN    7048/Xvnc
```
All subsequent GUI operations are performed through a VNC client.
Create the OS groups and users:

```
groupadd -g 1204 oinstall
groupadd -g 1200 dba
groupadd -g 1203 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1202 asmoper
```

```
useradd -u 1100 -g oinstall -G dba,asmdba,asmadmin -d /home/oracle oracle
useradd -u 1200 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid grid
passwd oracle
passwd grid
```
Check that the nobody user exists with

```
# id nobody
```

If it does not, create it manually, making sure the ID is identical on both nodes.
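A sketch of the combined check-and-create (UID 99 is only an example; any value works as long as rac1 and rac2 use the same one):

```bash
# create nobody only if the account is missing; keep the UID identical on both nodes
id nobody || useradd -u 99 nobody
```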
Set the kernel parameters:

```
cat 1>> /etc/sysctl.conf <<EOF
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 858993459200
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
EOF
```

Apply the settings with:

```
sysctl -p
```
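To spot-check that the values took effect, any of the keys from the block above can be queried directly:

```bash
# print the current values of a few of the parameters just set
sysctl fs.file-max kernel.shmmax fs.aio-max-nr
```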
Append the resource limits to /etc/security/limits.conf:

```
cat 1>> /etc/security/limits.conf <<EOF
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 4096
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 4096
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
grid soft memlock -1
grid hard memlock -1
oracle soft memlock -1
oracle hard memlock -1
EOF
```
Enable the limits for login sessions in /etc/pam.d/login:

```
cat 1>> /etc/pam.d/login <<EOF
session required pam_limits.so
EOF
```
Raise the shell limits for the oracle and grid users in /etc/profile:

```
cat 1>> /etc/profile <<EOF
if [ \$USER = "oracle" ] || [ \$USER = "grid" ]; then
    if [ \$SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
EOF
```
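A quick way to confirm the limits are picked up by a fresh login session:

```bash
# show max user processes (-u) and open files (-n) for the oracle user
su - oracle -c 'ulimit -u -n'
```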
grid user environment variables

Add the following to .bash_profile. RAC node 1 is used as the example; on node 2 set ORACLE_SID=+ASM2, on node 3 ORACLE_SID=+ASM3.

rac1

```
# su - grid
# vi ~/.bash_profile
umask 022
export ORACLE_SID=+ASM1
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_BASE=/u01/app/oracle
export PATH=/u01/app/11.2.0/grid/bin:$PATH
# source ~/.bash_profile
```

rac2

```
# su - grid
# vi ~/.bash_profile
umask 022
export ORACLE_SID=+ASM2
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_BASE=/u01/app/oracle
export PATH=/u01/app/11.2.0/grid/bin:$PATH
# source ~/.bash_profile
```
oracle user environment variables

Add the following to .bash_profile. RAC node 1 is used as the example; on node 2 set ORACLE_SID=db2, on node 3 ORACLE_SID=db3.

rac1

```
# su - oracle
# vi ~/.bash_profile
umask 022
export ORACLE_SID=db1
export ORACLE_BASE=/u01/app/oracledb
export ORACLE_HOME=/u01/app/oracledb/product/11.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
# source ~/.bash_profile
```

rac2

```
# su - oracle
# vi ~/.bash_profile
umask 022
export ORACLE_SID=db2
export ORACLE_BASE=/u01/app/oracledb
export ORACLE_HOME=/u01/app/oracledb/product/11.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
# source ~/.bash_profile
```
As root, run the following on all nodes to create the directories:

```
su - root
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
mkdir -p /u01/soft
mkdir -p /u01/app/oracledb
mkdir -p /u01/app/oracledb/product/11.2.0/db_1
chown -R grid:oinstall /u01/app/oracle
chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/app/oracledb
chown -R oracle:oinstall /u01/app/oracledb/product/11.2.0/db_1
chown -R grid:oinstall /u01/app/11.2.0/grid
chmod -R 775 /u01/
```
Disable transparent huge pages, both at boot via rc.local and on the kernel command line:

```
# chmod +x /etc/rc.d/rc.local
# cat >> /etc/rc.local <<EOF
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
EOF
# cat > /etc/default/grub <<EOF
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet transparent_hugepage=never"
GRUB_DISABLE_RECOVERY="true"
EOF
# grub2-mkconfig -o /boot/grub2/grub.cfg
```
If swap space is insufficient, extend it with a swap file:

```
# grep SwapTotal /proc/meminfo
SwapTotal:       2723836 kB
# mkdir -p /usr/swap
# dd if=/dev/zero of=/usr/swap/swapfile bs=1G count=2
# mkswap /usr/swap/swapfile
# swapon /usr/swap/swapfile
# grep SwapTotal /proc/meminfo
SwapTotal:       4820984 kB
```
To mount it automatically at boot, edit /etc/fstab and add a line at the end of the file:

```
/usr/swap/swapfile swap swap defaults 0 0
```
Edit the file /etc/systemd/logind.conf and set RemoveIPC to no:

```
RemoveIPC=no
```
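One way to make the change non-interactively (this assumes the stock commented-out RemoveIPC line is present, as on a default RHEL 7.2 install):

```bash
# force RemoveIPC=no in logind.conf, then verify
sed -i 's/^#\?RemoveIPC=.*/RemoveIPC=no/' /etc/systemd/logind.conf
grep RemoveIPC /etc/systemd/logind.conf
```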
Reload the service:

```
systemctl daemon-reload
systemctl restart systemd-logind
```
Edit the file /etc/sysconfig/network and add:

```
NOZEROCONF=yes
```
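For example:

```bash
# append the setting; skip this if the line is already present
echo "NOZEROCONF=yes" >> /etc/sysconfig/network
```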
Run the following commands to install the dependency packages; ignore any errors they report:

```
yum clean all
yum install -y binutils*
yum install -y compat-libcap1*
yum install -y compat-libstdc++*
yum install -y compat-libstdc++*686*
yum install -y e2fsprogs*
yum install -y e2fsprogs-libs*
yum install -y glibc*686*
yum install -y glibc*
yum install -y glibc-devel*
yum install -y glibc-devel*686*
yum install -y ksh*
yum install -y libgcc*686*
yum install -y libgcc*
yum install -y libs*
yum install -y libstdc++*
yum install -y libstdc++*686*
yum install -y libstdc++-devel*
yum install -y libaio*
yum install -y libaio*686*
yum install -y libaio-devel*
yum install -y libaio-devel*686*
yum install -y libXtst*
yum install -y libXtst*686*
yum install -y libX11*686*
yum install -y libX11*
yum install -y libXau*686*
yum install -y libXau*
yum install -y libxcb*686*
yum install -y libxcb*
yum install -y libXi*
yum install -y libXi*686*
yum install -y make*
yum install -y net-tools*
yum install -y nfs-utils*
yum install -y sysstat*
yum install -y smartmontools*
yum install -y unixODBC*
yum install -y unixODBC-devel*
yum install -y unixODBC*686*
yum install -y unixODBC-devel*686*
yum install -y gcc-*
yum install -y gcc-c++*
yum install -y elfutils-libelf-devel
```
Note: RHEL 7.2 was certified for Oracle 11.2.0.4 retroactively (11.2.0.4 was released before Red Hat 7.2). The compat-libstdc++-33 package is required by the 11.2.0.4 installation but is not shipped with Red Hat 7.2, so it has to be obtained from another release and installed manually.

The two packages can be obtained from the following addresses.

After downloading, install them with:
```
rpm -ivh compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm
rpm -ivh compat-libstdc++-33-3.2.3-72.el7.i686.rpm
```
Check the dependency installation status with:

```
rpm -q binutils compat-libcap1 compat-libstdc++-33 e2fsprogs e2fsprogs-libs \
      glibc glibc-devel ksh libgcc libstdc++ libstdc++-devel libaio libaio-devel \
      libXtst libX11 libXau libxcb libXi make net-tools nfs-utils sysstat \
      smartmontools unixODBC unixODBC-devel gcc gcc-c++ elfutils-libelf-devel
```

If any package reports "is not installed", install it before moving on.
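`rpm -q` prints `package <name> is not installed` for anything missing, so a quick filter shows only the gaps (a sketch over a subset of the list):

```bash
# list only the packages that are still missing
rpm -q binutils compat-libcap1 compat-libstdc++-33 ksh libaio libaio-devel | grep "not installed"
```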
The system has five shared storage disks attached, used as follows:

Device | Size | Purpose |
---|---|---|
/dev/sdb | 2G | vote (voting disk) |
/dev/sdc | 2G | vote (voting disk) |
/dev/sdd | 2G | vote (voting disk) |
/dev/sde | 20G | arch (archive logs) |
/dev/sdf | 40G | data |
The details can be inspected with fdisk -l. Because the storage is shared, partitioning only needs to be done on one node. Partition each disk; taking /dev/sde as an example, run fdisk /dev/sde and enter n -> p -> (accept every default) -> w.
```
# fdisk /dev/sde
Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-43548671, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-43548671, default 43548671):
Using default value 43548671
Partition 1 of type Linux and of size 20.8 GiB is set

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
```
Obtain each disk's WWID with /usr/lib/udev/scsi_id -g -u /dev/sdxxx. Because the storage is shared, every node sees the same WWID for a given disk. A udev rules file then uses these IDs to set permissions on the disks so that the grid user can access them.
```
# /usr/lib/udev/scsi_id -g -u /dev/sdb
1ATA_VBOX_HARDDISK_VB54ce865f-e65a7d00
# /usr/lib/udev/scsi_id -g -u /dev/sdc
1ATA_VBOX_HARDDISK_VB8f9429ee-32f50530
# /usr/lib/udev/scsi_id -g -u /dev/sdd
1ATA_VBOX_HARDDISK_VBc92cde00-a564f90e
# /usr/lib/udev/scsi_id -g -u /dev/sde
1ATA_VBOX_HARDDISK_VBcc226ad4-aee5f903
# /usr/lib/udev/scsi_id -g -u /dev/sdf
1ATA_VBOX_HARDDISK_VB3fd31e1a-a035187e
```
Using the WWIDs found above, create the rules file /etc/udev/rules.d/99-asmdevices.rules with the content below; RESULT is the WWID queried above, one rule per disk.

```
ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="1ATA_VBOX_HARDDISK_VB54ce865f-e65a7d00", SYMLINK+="asmdisk001", OWNER="grid", GROUP="asmadmin", MODE="0660"
ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="1ATA_VBOX_HARDDISK_VB8f9429ee-32f50530", SYMLINK+="asmdisk002", OWNER="grid", GROUP="asmadmin", MODE="0660"
ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="1ATA_VBOX_HARDDISK_VBc92cde00-a564f90e", SYMLINK+="asmdisk003", OWNER="grid", GROUP="asmadmin", MODE="0660"
ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="1ATA_VBOX_HARDDISK_VBcc226ad4-aee5f903", SYMLINK+="asmdisk004", OWNER="grid", GROUP="asmadmin", MODE="0660"
ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="1ATA_VBOX_HARDDISK_VB3fd31e1a-a035187e", SYMLINK+="asmdisk005", OWNER="grid", GROUP="asmadmin", MODE="0660"
```
Apply the rules:

```
partprobe
udevadm control --reload-rules
udevadm trigger --type=devices --action=change
```

When binding by name, be sure to refresh the disk information with partprobe before applying the rules file.
If asmdisk* symlinks appear under /dev/, the rules were applied successfully:

```
# cd /dev
# ll asmdisk*
lrwxrwxrwx 1 root root 3 Sep 26 00:55 asmdisk001 -> sdb
lrwxrwxrwx 1 root root 3 Sep 26 00:55 asmdisk002 -> sdc
lrwxrwxrwx 1 root root 3 Sep 26 00:55 asmdisk003 -> sdd
lrwxrwxrwx 1 root root 3 Sep 26 00:55 asmdisk004 -> sde
lrwxrwxrwx 1 root root 3 Sep 26 00:55 asmdisk005 -> sdf
```
Checking the device permissions now, they should be 660 (rw-rw----) with owner grid:asmadmin:

```
# ls -l /dev/sd*
brw-rw----. 1 grid asmadmin 8, 16 Sep 26 10:38 /dev/sdb
brw-rw----. 1 grid asmadmin 8, 32 Sep 26 10:38 /dev/sdc
brw-rw----. 1 grid asmadmin 8, 48 Sep 26 10:38 /dev/sdd
brw-rw----. 1 grid asmadmin 8, 64 Sep 26 10:38 /dev/sde
brw-rw----. 1 grid asmadmin 8, 80 Sep 26 10:38 /dev/sdf
```
Upload the installation packages to /u01/soft:

```
# ll /u01/soft/
total 9797256
-rw-r--r--@ 1 grid oinstall 1395582860  9 26 14:13 p13390677_112040_Linux-x86-64_1of7.zip
-rw-r--r--@ 1 grid oinstall 1151304589  9 26 13:52 p13390677_112040_Linux-x86-64_2of7.zip
-rw-r--r--@ 1 grid oinstall 1205251894  9 20 01:57 p13390677_112040_Linux-x86-64_3of7.zip
-rw-r--r--@ 1 grid oinstall 1133472011  9 26 09:59 p29255947_112040_Linux-x86-64.zip
-rw-r--r--@ 1 grid oinstall  113112960  9 17 13:02 p6880880_112000_Linux-x86-64.zip
```
Unpack them as the grid user (note the quotes, so that unzip expands the wildcard itself):

```
# su - grid
$ cd /u01/soft
$ unzip '*.zip'
```
Install the cvuqdisk package:

```
# su - root
# cd /u01/soft/grid/rpm
# rpm -ivh cvuqdisk-1.0.9-1.rpm
```

Copy /u01/soft/grid/rpm/cvuqdisk-1.0.9-1.rpm to the tmp directory of the other nodes (scp works for this), then run the following on each of them:

```
# su - root
# cd /tmp
# scp grid@rac1:/u01/soft/grid/rpm/cvuqdisk-1.0.9-1.rpm .
# rpm -ivh cvuqdisk-1.0.9-1.rpm
```
With the cvu package installed everywhere, log in as the grid user on node 1 and start the grid (cluster) software installation:

```
# su - grid
$ export DISPLAY=:1.0
$ xhost +
$ cd /u01/soft/grid
$ ./runInstaller -jreLoc /etc/alternatives/jre_1.8.0
```
export DISPLAY=:1.0 controls which display the GUI is sent to; :1 is the VNC display. Passing -jreLoc /etc/alternatives/jre_1.8.0 to runInstaller works around rendering bugs in the installer that otherwise truncate dialogs or make buttons unclickable.
Select the installation language (Simplified Chinese here). Leave the SCAN port at 1521; the names can be modified. Do not configure GNS. Click Add to add the other cluster node. If node trust has not been configured yet, clicking Next at this point raises error [INS-30132]. Click SSH Connectivity on this screen to set up the trust: enter the rac2 grid user's password and click Setup. Once it reports success, click Test to confirm the nodes now trust each other; clicking Test before Setup fails with an error.
Mark interfaces that are neither the public nor the private NIC as Do Not Use. Choose Oracle ASM as the storage option. For Disk Group Name, enter CRS, click Change Discovery Path, and enter /dev/asmdisk*; this is the device name pattern we set up earlier in the udev storage rules. Following the plan above, select asmdisk001/002/003, the three 2G disks.
The missing pdksh package can be ignored here because the system ships ksh; the ASM disk device warnings can also be ignored because raw shared disks are used and sharing has been confirmed. If other errors appear, resolve them as prompted and click Check Again to re-run the checks. Errors you have reviewed and accepted can be skipped by ticking Ignore All and continuing.
Running the first script, /u01/app/oraInventory/orainstRoot.sh, generally succeeds without problems. Running the second script, /u01/app/11.2.0/grid/root.sh, fails with the following error:

```
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2019-09-27 12:54:19.483:
```
This is a compatibility problem between RHEL 7.x and 11.2.0.4.0. RHEL 7 uses systemd rather than initd to run and restart processes, while root.sh starts the ohasd process through the traditional initd mechanism. The fix is to define ohasd as a systemd service on RHEL 7 and have it running before root.sh executes. Stop the root.sh script and run the following as root:

```
# touch /usr/lib/systemd/system/ohas.service
# chmod 777 /usr/lib/systemd/system/ohas.service
# cat >> /usr/lib/systemd/system/ohas.service <<EOF
[Unit]
Description=Oracle High Availability Services
After=syslog.target

[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1
Type=simple
Restart=always

[Install]
WantedBy=multi-user.target
EOF
# systemctl daemon-reload
# systemctl enable ohas.service
# systemctl start ohas.service
# systemctl status ohas.service
● ohas.service - Oracle High Availability Services
   Loaded: loaded (/usr/lib/systemd/system/ohas.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2019-09-27 00:36:06 CST; 4s ago
 Main PID: 5730 (init.ohasd)
   CGroup: /system.slice/ohas.service
           └─5730 /bin/sh /etc/init.d/init.ohasd run >/dev/null 2>&1

Sep 27 00:36:06 rac2 systemd[1]: Started Oracle High Availability Services.
Sep 27 00:36:06 rac2 systemd[1]: Starting Oracle High Availability Services...
```
Re-run root.sh. If it still fails, the likely cause is that root.sh created init.ohasd but ohas.service did not start right away. The workaround: while root.sh runs, keep watching /etc/init.d until the init.ohasd file appears, then immediately start the service by hand:

```
systemctl start ohas.service
```
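A small helper for that race (a sketch; run it as root in a second terminal just before launching root.sh):

```bash
# wait for root.sh to drop init.ohasd, then start the ohas service immediately
while [ ! -f /etc/init.d/init.ohasd ]; do
    sleep 1
done
systemctl start ohas.service
```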
The script has completed successfully when both nodes print the following:

```
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.CRS.dg' on 'rac1'
CRS-2676: Start of 'ora.CRS.dg' on 'rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
```
After all nodes have finished, run the following on any node to check the resource state across the cluster:

```
# /u01/app/11.2.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
```
During cluster verification the installer complains about SCAN resolution. This is because hosts-file resolution is used here; as long as every node resolves definescan correctly, click Next and choose to ignore the error.
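A quick sanity check, run on every node, that hosts-based resolution of the SCAN works:

```bash
# both commands should resolve definescan to the SCAN IP from /etc/hosts
getent hosts definescan
ping -c 2 definescan
```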
If the grid installation fails and has to be redone, clean up the previous attempt first (as root, on the affected nodes):

```
cd /u01/app
rm -rf *
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
mkdir -p /u01/soft
mkdir -p /u01/app/oracledb
mkdir -p /u01/app/oracledb/product/11.2.0/db_1
chown -R grid:oinstall /u01/app/oracle
chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/app/oracledb
chown -R oracle:oinstall /u01/app/oracledb/product/11.2.0/db_1
chown -R grid:oinstall /u01/app/11.2.0/grid
chmod -R 775 /u01/
cd /etc/
rm -rf ora*
# remove the previous configuration
/u01/app/11.2.0/grid/perl/bin/perl /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force
```
Log in as the oracle user on node 1 and install the database software:

```
# su - oracle
$ export DISPLAY=:1.0
$ xhost +
$ cd /u01/soft/database
$ ./runInstaller -jreLoc /etc/alternatives/jre_1.8.0
```
As with the cluster installation, mutual trust has to be configured here: enter the rac2 oracle user's password, click Setup, then click Test when it finishes; once the test passes, continue. For the privileged operating system groups, use dba.
During linking, the installer reports:

```
Error in invoking target 'agent nmhs' of makefile...
```

This is another RHEL 7.x / 11.2.0.4 compatibility bug. The fix (run it on the installing node only):

```
su - oracle
cd $ORACLE_HOME/sysman/lib
vi ins_emagent.mk
```

Search for the keyword MK_EMAGENT_NMECTL and append -lnnz11, so the stanza reads:

```
#===========================
#  emdctl
#===========================

$(SYSMANBIN)emdctl:
	$(MK_EMAGENT_NMECTL) -lnnz11
```

After the change, return to the installer and click Retry.
Run asmca as the grid user and click Create to create the remaining disk groups:

```
su -
export DISPLAY=:1.0
xhost +
su - grid
export DISPLAY=:1.0
xhost +
asmca
```
Choose External redundancy for the disk group. If the candidate disks are not visible here, the window may simply be too small; drag its bottom-right corner to enlarge it. Choose External redundancy for the second disk group as well. Once done, the ASM disk group status should show every group mounted; click Mount All if any are not, then click Exit.
Run dbca as the oracle user to create the database:

```
su -
export DISPLAY=:1.0
xhost +
su - oracle
export DISPLAY=:1.0
xhost +
dbca
```
Place the archive logs in +ARCH. Raise the process count to 1000. Select the UTF-8 character set. Keep the default connection mode.
The following steps must be performed on all nodes. The PSU and the new OPatch are already in /u01/soft:

```
# ll /u01/soft
-rw-r--r--@ 1 grid oinstall 1133472011  9 26 09:59 p29255947_112040_Linux-x86-64.zip
-rw-r--r--@ 1 grid oinstall  113112960  9 17 13:02 p6880880_112000_Linux-x86-64.zip
```
Replace the OPatch directory in both homes with the new version:

```
export GRID_HOME=/u01/app/11.2.0/grid
export ORACLE_HOME=/u01/app/oracledb/product/11.2.0/db_1
mv $GRID_HOME/OPatch $GRID_HOME/OPatch_bak
mv $ORACLE_HOME/OPatch $ORACLE_HOME/OPatch_bak
unzip p6880880_112000_Linux-x86-64.zip -d $GRID_HOME
unzip p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME
chown -R grid:oinstall $GRID_HOME/OPatch
chown -R oracle:oinstall $ORACLE_HOME/OPatch
```
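To confirm both homes now run the new OPatch (the exact version string depends on the p6880880 build downloaded):

```bash
# check the OPatch version in the grid and database homes
$GRID_HOME/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch version
```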
Unpack the PSU as the grid user and generate an OCM response file:

```
# su - grid
$ cd /u01/soft
$ unzip p29255947_112040_Linux-x86-64.zip
$ /u01/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp -no_banner -output /tmp/ocm.rsp
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:
You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: y
The OCM configuration response file (/tmp/ocm.rsp) was successfully created.
$ ll /tmp/ocm.rsp
-rw-r--r-- 1 grid oinstall 621 Sep 28 16:27 /tmp/ocm.rsp
```
Apply the patch as root with opatch auto:

```
# su -
# cd /u01/soft
# export PATH=/u01/app/11.2.0/grid/OPatch:$PATH
# opatch auto ./29255947/ -ocmrf /tmp/ocm.rsp
Executing /u01/app/11.2.0/grid/perl/bin/perl /u01/app/11.2.0/grid/OPatch/crs/patch11203.pl -patchdir . -patchn 29255947 -ocmrf /tmp/ocm.rsp -paramfile /u01/app/11.2.0/grid/crs/install/crsconfig_params
This is the main log file: /u01/app/11.2.0/grid/cfgtoollogs/opatchauto2019-09-28_16-32-59.log
This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/u01/app/11.2.0/grid/cfgtoollogs/opatchauto2019-09-28_16-32-59.report.log
2019-09-28 16:32:59: Starting Clusterware Patch Setup
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Stopping RAC /u01/app/oracle_base/product/11.2.0/db_1 ...
Stopped RAC /u01/app/oracle_base/product/11.2.0/db_1 successfully
patch ./29255947/29141201/custom/server/29141201  apply successful for home  /u01/app/oracle_base/product/11.2.0/db_1
patch ./29255947/29141056  apply successful for home  /u01/app/oracle_base/product/11.2.0/db_1
Stopping CRS...
Stopped CRS successfully
patch ./29255947/29141201  apply successful for home  /u01/app/11.2.0/grid
patch ./29255947/29141056  apply successful for home  /u01/app/11.2.0/grid
patch ./29255947/28729245  apply successful for home  /u01/app/11.2.0/grid
Starting CRS...
Installing Trace File Analyzer
CRS-4123: Oracle High Availability Services has been started.
Starting RAC /u01/app/oracle_base/product/11.2.0/db_1 ...
Started RAC /u01/app/oracle_base/product/11.2.0/db_1 successfully
opatch auto succeeded.
```
The patch has been applied successfully when opatch auto succeeded appears. If it fails, the console output names the log file to inspect; in the run above it is /u01/app/11.2.0/grid/cfgtoollogs/opatchauto2019-09-28_16-32-59.log.
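You can also confirm the patch is recorded in each home's inventory by looking for the sub-patch numbers from the log above (29141056, 29141201, 28729245):

```bash
# list applied patches in the grid and database homes
su - grid -c '/u01/app/11.2.0/grid/OPatch/opatch lsinventory'
su - oracle -c '/u01/app/oracledb/product/11.2.0/db_1/OPatch/opatch lsinventory'
```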
At this point, the RAC database installation is complete.