Oracle 10g RAC: adding a node

The goal is to add a node to an existing 10.2.0.5 two-node RAC cluster, expanding it from two nodes to three.

1. Environment checks

First, check the OS version, disk space, and related details on the two existing nodes.

Current state of the RAC cluster:

[oracle@node1 ~]$ olsnodes -p
node1 node1-priv
node2 node2-priv
[oracle@node1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....a1.inst application ONLINE ONLINE node1
ora....a2.inst application ONLINE ONLINE node2
ora.aaa.db application ONLINE ONLINE node1
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
[oracle@node1 ~]$

1.1 OS version
[oracle@node1 ~]$ echo $ORACLE_HOME
/opt/app/oracle/product/10.2.0/db_1

[oracle@node1 ~]$ cat /etc/issue
Enterprise Linux Enterprise Linux Server release 5.5 (Carthage)
Kernel r on an m

Prepare another OS install for the new node, also OEL 5u5.

1.2 Disk space
Check the available disk space; about 20 GB is required.

1.3 Network environment
hosts file
After the new node is added, the hosts file should look like this:

#public
192.168.56.105 node1.localdomain node1
192.168.56.106 node2.localdomain node2
192.168.56.133 node3.localdomain node3

#private
172.10.0.1 node1-priv.localdomain node1-priv
172.10.0.2 node2-priv.localdomain node2-priv
172.10.0.3 node3-priv.localdomain node3-priv

#virtual
192.168.56.115 node1-vip.localdomain node1-vip
192.168.56.116 node2-vip.localdomain node2-vip
192.168.56.134 node3-vip.localdomain node3-vip

The hosts entries above must be present on all three nodes.

Add two NICs: one for the public network, the other for the private interconnect between the nodes.
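The hosts entries have to match exactly on every node, which a quick check can enforce. A sketch, assuming the naming convention above (the short name is the last field of each entry); `check_hosts` is a hypothetical helper, not an Oracle tool:

```shell
# Sketch: verify a hosts file contains public, private, and VIP entries
# for each node, following the node / node-priv / node-vip convention.
check_hosts() {
  file=$1; shift
  for n in "$@"; do
    for suffix in "" "-priv" "-vip"; do
      # each entry ends with the short name, e.g. "... node3-vip"
      grep -q "[[:space:]]${n}${suffix}\$" "$file" || {
        echo "missing: ${n}${suffix}"
        return 1
      }
    done
  done
  echo "hosts file OK"
}
# On each node: check_hosts /etc/hosts node1 node2 node3
```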

1.4 Base OS installation
Make sure none of the required RPM packages are missing.

1.5 Attach the shared storage to the third node

1.6 Binding the raw devices
This is done by editing the /etc/udev/rules.d/60-raw.rules configuration file.

#ocr
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
#votingdisk
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdf1", RUN+="/bin/raw /dev/raw/raw5 %N"
#controlfile
ACTION=="add", KERNEL=="sdg1", RUN+="/bin/raw /dev/raw/raw6 %N"
ACTION=="add", KERNEL=="sdh1", RUN+="/bin/raw /dev/raw/raw7 %N"
ACTION=="add", KERNEL=="sdi1", RUN+="/bin/raw /dev/raw/raw8 %N"
#spfile and pwdfile
ACTION=="add", KERNEL=="sdj1", RUN+="/bin/raw /dev/raw/raw9 %N"
ACTION=="add", KERNEL=="sdk1", RUN+="/bin/raw /dev/raw/raw10 %N"
#redo
ACTION=="add", KERNEL=="sdl1", RUN+="/bin/raw /dev/raw/raw11 %N"
ACTION=="add", KERNEL=="sdm1", RUN+="/bin/raw /dev/raw/raw12 %N"
ACTION=="add", KERNEL=="sdn1", RUN+="/bin/raw /dev/raw/raw13 %N"

#system,sysaux,undotbs
ACTION=="add", KERNEL=="sdo1", RUN+="/bin/raw /dev/raw/raw14 %N"
ACTION=="add", KERNEL=="sdp1", RUN+="/bin/raw /dev/raw/raw15 %N"
ACTION=="add", KERNEL=="sdq1", RUN+="/bin/raw /dev/raw/raw16 %N"
ACTION=="add", KERNEL=="sdr1", RUN+="/bin/raw /dev/raw/raw17 %N"
#temp,user
ACTION=="add", KERNEL=="sds1", RUN+="/bin/raw /dev/raw/raw18 %N"
ACTION=="add", KERNEL=="sdt1", RUN+="/bin/raw /dev/raw/raw19 %N"
#redo added
ACTION=="add", KERNEL=="sdu1", RUN+="/bin/raw /dev/raw/raw20 %N"

#other raw
ACTION=="add", KERNEL=="sdu2", RUN+="/bin/raw /dev/raw/raw21 %N"
ACTION=="add", KERNEL=="sdu3", RUN+="/bin/raw /dev/raw/raw22 %N"
ACTION=="add", KERNEL=="sdu4", RUN+="/bin/raw /dev/raw/raw23 %N"

ACTION=="add", KERNEL=="sdw2", RUN+="/bin/raw /dev/raw/raw24 %N"
ACTION=="add", KERNEL=="sdx3", RUN+="/bin/raw /dev/raw/raw25 %N"
ACTION=="add", KERNEL=="sdy4", RUN+="/bin/raw /dev/raw/raw26 %N"
#config for the owner and privs
ACTION=="add", KERNEL=="raw1", OWNER="root", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw2", OWNER="root", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw3", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw4", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw5", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw6", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw7", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw8", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw9", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw10", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw11", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw12", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw13", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw14", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw15", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw16", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw17", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw18", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw19", OWNER="oracle", GROUP="oinstall", MODE="660"

ACTION=="add", KERNEL=="raw20", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw21", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw22", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw23", OWNER="oracle", GROUP="oinstall", MODE="660"

ACTION=="add", KERNEL=="raw24", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw25", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw26", OWNER="oracle", GROUP="oinstall", MODE="660"
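The binding and permission lines above are mechanical, so generating them with a loop avoids typos. A minimal sketch (`gen_raw_rules` is a hypothetical helper; the device list and the root-owned OCR raws still need your attention):

```shell
# Sketch: emit a binding rule plus a permission rule for each partition,
# numbering raw devices sequentially. Note: the OCR raws (raw1, raw2)
# must end up OWNER="root", so edit those two lines after generating.
gen_raw_rules() {
  i=1
  for dev in "$@"; do
    printf 'ACTION=="add", KERNEL=="%s", RUN+="/bin/raw /dev/raw/raw%d %%N"\n' "$dev" "$i"
    printf 'ACTION=="add", KERNEL=="raw%d", OWNER="oracle", GROUP="oinstall", MODE="660"\n' "$i"
    i=$((i + 1))
  done
}
# e.g.: gen_raw_rules sdb1 sdc1 sdd1 >> /etc/udev/rules.d/60-raw.rules
```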

[root@node3 ~]# start_udev
Starting udev: [ OK ]

Check the configured raw device bindings:

[root@node3 ~]# raw -qa
/dev/raw/raw1: bound to major 8, minor 17
/dev/raw/raw2: bound to major 8, minor 33
/dev/raw/raw3: bound to major 8, minor 49
/dev/raw/raw4: bound to major 8, minor 65
/dev/raw/raw5: bound to major 8, minor 81
/dev/raw/raw6: bound to major 8, minor 97
/dev/raw/raw7: bound to major 8, minor 113
/dev/raw/raw8: bound to major 8, minor 129
/dev/raw/raw9: bound to major 8, minor 145
/dev/raw/raw10: bound to major 8, minor 161
/dev/raw/raw11: bound to major 8, minor 177
/dev/raw/raw12: bound to major 8, minor 193
/dev/raw/raw13: bound to major 8, minor 209
/dev/raw/raw14: bound to major 8, minor 225
/dev/raw/raw15: bound to major 8, minor 241
/dev/raw/raw16: bound to major 65, minor 1
/dev/raw/raw17: bound to major 65, minor 17
/dev/raw/raw18: bound to major 65, minor 33
/dev/raw/raw19: bound to major 65, minor 49
/dev/raw/raw20: bound to major 65, minor 65
/dev/raw/raw21: bound to major 65, minor 66
/dev/raw/raw22: bound to major 65, minor 67
/dev/raw/raw23: bound to major 65, minor 68
[root@node3 ~]#

Reboot node3 to check whether the raw device bindings survive a restart.
After the reboot, the bindings were still in place.

1.7 Directory preparation and ownership

CRS_HOME
mkdir -p /opt/app/oracle/product/10.2.0/crs

ORACLE_BASE
mkdir -p /opt/app/oracle

ORACLE_HOME
mkdir -p /opt/app/oracle/product/10.2.0/db_1

oraInventory
mkdir -p /opt/app/oracle/oraInventory

--chown--

chown -R oracle:oinstall /opt/app/oracle/product/10.2.0/crs
chown -R oracle:oinstall /opt/app/oracle
chown -R oracle:oinstall /opt/app/oracle/product/10.2.0/db_1
chown -R oracle:oinstall /opt/app/oracle/oraInventory

--chmod--

chmod -R 775 /opt/app/oracle/product/10.2.0/crs
chmod -R 775 /opt/app/oracle
chmod -R 775 /opt/app/oracle/product/10.2.0/db_1
chmod -R 775 /opt/app/oracle/oraInventory
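Since the CRS home, ORACLE_HOME, and inventory all live under /opt/app/oracle, one recursive chown/chmod on the base directory covers everything; only the mkdir list needs spelling out. A sketch with the base path parameterized (`make_oracle_dirs` is a hypothetical helper; run the chown/chmod part as root):

```shell
# Sketch: create the Oracle directory tree for a given base path.
make_oracle_dirs() {
  base=$1
  for d in "$base" \
           "$base/product/10.2.0/crs" \
           "$base/product/10.2.0/db_1" \
           "$base/oraInventory"; do
    mkdir -p "$d"
  done
  # afterwards, as root (one recursive pass covers all subdirectories):
  # chown -R oracle:oinstall "$base" && chmod -R 775 "$base"
}
# On node3: make_oracle_dirs /opt/app/oracle
```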

1.8 Preparing the .bash_profile environment variables

Copy the .bash_profile from either node1 or node2, then adjust the node-specific values: on node3, ORACLE_SID must be changed to aaa3.

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

# User specific environment and startup programs
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_BASE=/opt/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
ORACLE_SID=aaa1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
ORACLE_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs; export ORACLE_CRS_HOME
DBCA_RAW_CONFIG=/home/oracle/dbca_raw_config; export DBCA_RAW_CONFIG
PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_CRS_HOME/bin
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

export PATH

if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
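ORACLE_SID is the one value that must differ per node (aaa3 on node3). A small helper can derive it from the hostname, assuming the nodeN/aaaN naming convention of this cluster (`sid_for_node` is illustrative, not a standard tool):

```shell
# Sketch: map a node name to its instance SID under the
# node1 -> aaa1, node2 -> aaa2, node3 -> aaa3 convention.
sid_for_node() {
  echo "aaa${1#node}"   # strip the "node" prefix, keep the number
}
# In .bash_profile: ORACLE_SID=$(sid_for_node "$(hostname -s)"); export ORACLE_SID
```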

1.9 RPM package check
—————————–
binutils-2.17.50.0.6-2.el5
compat-libstdc++-296-2.96-138
compat-libstdc++-33-3.2.3-61
elfutils-libelf-0.125-3.el5
elfutils-libelf-devel-0.125
gcc-4.1.1-52
gcc-c++-4.1.1-52
glibc-2.5-12
glibc-common-2.5-12
glibc-devel-2.5-12
glibc-headers-2.5-12
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.1-52
libstdc++-4.1.1
libstdc++-devel-4.1.1-52.el5
libXp-1.0.0-8
make-3.81-1.1
openmotif-2.2.3
sysstat-7.0.0
unixODBC-2.2.11
unixODBC-devel-2.2.11
—————————–
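The list can be checked mechanically instead of by eye. A sketch; the query is factored out into a `have` function (a stand-in name) so that on the node it can simply wrap `rpm -q`:

```shell
# Sketch: print each required package that the "have" query fails for.
missing_pkgs() {
  for p in "$@"; do
    have "$p" >/dev/null 2>&1 || printf '%s\n' "$p"
  done
}
# On node3:
# have() { rpm -q "$1"; }
# missing_pkgs binutils compat-libstdc++-33 elfutils-libelf gcc gcc-c++ \
#              glibc-devel libaio libaio-devel libXp make openmotif \
#              sysstat unixODBC unixODBC-devel
```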

1.10 Parameter checks
Because the oracle-validated package is installed, the kernel parameter checks are straightforward.

(1)/etc/security/limits.conf

#* soft core 0
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
#@student - maxlogins 4
# End of file
oracle soft nofile 131072
oracle hard nofile 131072
oracle soft nproc 131072
oracle hard nproc 131072
oracle soft core unlimited
oracle hard core unlimited
oracle soft memlock 50000000
oracle hard memlock 50000000

The oracle-validated package has already added these entries for you.

(2) /etc/pam.d/login
Most of the configuration is already in place; just add the following line:

#for rac install
session required pam_limits.so

(3) /etc/sysctl.conf
The oracle-validated package has already set most of the parameters; just comment in or out the lines appropriate to a 10g versus an 11g install.
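For reference, the kernel parameters a 10gR2 install cares about are roughly the following minimums (from the 10.2 install guide; treat them as illustrative and prefer the values oracle-validated has already written):

```
kernel.shmall = 2097152
# shmmax is typically about half of physical RAM
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
```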

(4) Configure hangcheck-timer
Edit /etc/modprobe.conf and add the following line:

options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
[root@node3 ~]# modprobe -v hangcheck-timer
insmod /lib/modules/2.6.18-194.el5xen/kernel/drivers/char/hangcheck-timer.ko hangcheck_tick=30 hangcheck_margin=180
[root@node3 ~]# modprobe -v hangcheck-timer
[root@node3 ~]#

1.11 Configuring SSH user equivalence
A 10g RAC install still requires configuring SSH equivalence between the nodes by hand; in 11g RAC this can be done automatically during installation.
As the oracle user, generate a key pair on node3.
—————————————————————
[oracle@node3 ~]$ mkdir ~/.ssh
[oracle@node3 ~]$ pwd
/home/oracle
[oracle@node3 ~]$ chmod 700 ~/.ssh
[oracle@node3 ~]$ /usr/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
a4:79:ef:3f:75:c3:f3:7e:ad:fc:a2:27:99:03:83:13 oracle@node3

[oracle@node3 .ssh]$ pwd
/home/oracle/.ssh
[oracle@node3 .ssh]$ ls
id_rsa id_rsa.pub
—————————————————————-

On node1, scp the authorized_keys file, which already holds the shared keys of node1 and node2, over to node3.

[oracle@node1 .ssh]$ scp authorized_keys node3:/home/oracle/.ssh/

Redirect the contents of node3's id_rsa.pub into the authorized_keys file that holds node1's and node2's keys.

[oracle@node3 .ssh]$ cat id_rsa.pub >> authorized_keys

Now this authorized_keys on node3 holds the keys of all three hosts (node1, node2, node3). Copy it over the authorized_keys files on node1 and node2.
Then test equivalence from each of the three nodes:
————————————————————-
[oracle@node2 ~]$ ssh node1 date
Wed Aug 14 09:53:29 CST 2013
[oracle@node2 ~]$ ssh node3 date
Wed Aug 14 09:53:51 CST 2013
[oracle@node2 ~]$ ssh node3
Last login: Wed Aug 14 09:50:13 2013 from node2.localdomain
[oracle@node3 ~]$ ssh node1 date
Wed Aug 14 09:53:40 CST 2013
[oracle@node3 ~]$ ssh node2 date
Wed Aug 14 09:53:36 CST 2013
————————————————————-
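The pairwise checks above can be wrapped in a loop run from each node. A sketch; `run_on` is a stand-in so the loop can be shown without a live cluster, and on a real node it would wrap `ssh` in batch mode (which also confirms no password or prompt would block the OUI):

```shell
# Sketch: run a probe against every node and report per-node status;
# returns non-zero if any node fails.
check_equivalence() {
  rc=0
  for n in "$@"; do
    if run_on "$n" >/dev/null 2>&1; then
      echo "$n: ok"
    else
      echo "$n: FAILED"
      rc=1
    fi
  done
  return $rc
}
# On each node:
# run_on() { ssh -o BatchMode=yes "$1" date; }
# check_equivalence node1 node2 node3
```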

//Host preparation for adding a node to the existing RAC cluster is now complete.

1.12 CRS version check

[oracle@node1 ~]$ crsctl query crs softwareversion
CRS software version on node [node1] is [10.2.0.5.0]

1.13 RDBMS and instance version check

[oracle@node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.5.0 - Production on Wed Aug 14 11:16:34 2013

Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> set linesize 180
SQL>
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
PL/SQL Release 10.2.0.5.0 - Production
CORE 10.2.0.5.0 Production
TNS for Linux: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production

1.14 Pre-install check for CRS

Before the actual installation, use runcluvfy.sh from the CRS installation media to check that the three nodes meet the install requirements.

[oracle@node1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2,node3 -verbose

2. Adding the new node (CRS_HOME)
$ORACLE_CRS_HOME/oui/bin/addNode.sh

[oracle@node1 bin]$ ls
addLangs.sh addNode.sh attachHome.sh detachHome.sh lsnodes ouica.bat ouica.sh resource runConfig.sh runInstaller runInstaller.sh
[oracle@node1 bin]$ pwd
/opt/app/oracle/product/10.2.0/crs/oui/bin

./addNode.sh

Error: OUI-10009: There are no new nodes to add to this installation

Solution:

In the oracle user's .bash_profile, rename the $ORACLE_CRS_HOME variable to $ORA_CRS_HOME.

So, at least on 10g RAC, the CRS home environment variable is best named $ORA_CRS_HOME.

Scripts to run at the end of the installation

I launched $ORA_CRS_HOME/oui/bin/addNode.sh from node1. At the end, orainstRoot.sh and root.sh must be run as root on node3, and rootaddnode.sh on node1.

//On node3
——————-
[root@node3 oraInventory]# ./orainstRoot.sh
Changing permissions of /opt/app/oracle/oraInventory to 770.
Changing groupname of /opt/app/oracle/oraInventory to oinstall.
The execution of the script is complete
———————

//On node1
———————————————————
[root@node1 install]# ./rootaddnode.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Attempting to add 1 new nodes to the configuration
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 3: node3 node3-priv node3
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/opt/app/oracle/product/10.2.0/crs/bin/srvctl add nodeapps -n node3 -A node3-vip.localdomain/255.255.255.0/eth0 -o /opt/app/oracle/product/10.2.0/crs
——————————————————–

//On node3
————————
[root@node3 crs]# ./root.sh
WARNING: directory '/opt/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/opt/app/oracle/product' is not owned by root
WARNING: directory '/opt/app/oracle' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
OCR LOCATIONS = /dev/raw/raw1,/dev/raw/raw2
OCR backup directory '/opt/app/oracle/product/10.2.0/crs/cdata/node' does not exist. Creating now
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/opt/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/opt/app/oracle/product' is not owned by root
WARNING: directory '/opt/app/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-priv node1
node 2: node2 node2-priv node2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
node1
node2
node3
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)

————————

On node1, crs_stat now shows the newly added node3, and its nodeapps (gsd, ons, vip) are ONLINE.

[oracle@node1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....a1.inst application ONLINE ONLINE node1
ora....a2.inst application ONLINE ONLINE node2
ora.aaa.db application ONLINE ONLINE node1
ora....E1.lsnr application ONLINE ONLINE node1
ora.node1.gsd application ONLINE ONLINE node1
ora.node1.ons application ONLINE ONLINE node1
ora.node1.vip application ONLINE ONLINE node1
ora....E2.lsnr application ONLINE ONLINE node2
ora.node2.gsd application ONLINE ONLINE node2
ora.node2.ons application ONLINE ONLINE node2
ora.node2.vip application ONLINE ONLINE node2
ora.node3.gsd application ONLINE ONLINE node3
ora.node3.ons application ONLINE ONLINE node3
ora.node3.vip application ONLINE ONLINE node3

Obtain the remote ONS port:

[oracle@node1 conf]$ more ons.config
localport=6113
remoteport=6200
loglevel=3
useocr=on
[oracle@node1 conf]$ pwd
/opt/app/oracle/product/10.2.0/crs/opmn/conf
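The remoteport value can be pulled out of ons.config programmatically rather than read by eye; a sketch (`ons_remoteport` is a hypothetical helper):

```shell
# Sketch: print the remoteport setting from an ons.config file.
ons_remoteport() {
  awk -F= '$1 == "remoteport" { print $2 }' "$1"
}
# On node1:
# racgons add_config node3:$(ons_remoteport $ORA_CRS_HOME/opmn/conf/ons.config)
```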

Add the remote port for node3:

[oracle@node1 ~]$ racgons add_config node3:6200

//Reboot node3 and confirm that the CRS-managed nodeapps come up automatically with the node

//See the screenshots for the detailed GUI steps
3. Adding the new node (ORACLE_HOME)

This is similar to adding the CRS home: go into $ORACLE_HOME/oui/bin and run addNode.sh.

At the second step it failed again, just like the CRS addition did: OUI-10009: There are no new nodes to add...

Solution: refresh the current node list with $ORACLE_HOME/oui/bin/runInstaller.sh updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=node1,node2",
then run ./addNode.sh again.

Problem solved.

//See screenshots
//root.sh execution
[root@node3 db_1]# ./root.sh
Running Oracle 10g root.sh script...

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /opt/app/oracle/product/10.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

//The ORACLE_HOME addition is complete.

4. Adding the node (listener configuration)

./netca

See screenshots

5. Adding the node (instance)

./dbca

See screenshots

Note: dbca reported an error while adding the instance. The cause: node3 was missing an undo tablespace and two redo log groups (which must be listed in the mapping file).

Solution: allocate undotbs3 (2 GB) plus redo3_2 and redo3_3 (50 MB each), and make them visible to every node at the physical level (raw device bindings).

//Add these three raw bindings on every node
Edit /etc/udev/rules.d/60-raw.rules and add the undotbs3, redo3_2, redo3_3 entries.

#other raw undotbs3,redo3_2,redo3_3
ACTION=="add", KERNEL=="sdw1", RUN+="/bin/raw /dev/raw/raw24 %N"
ACTION=="add", KERNEL=="sdx1", RUN+="/bin/raw /dev/raw/raw25 %N"
ACTION=="add", KERNEL=="sdy1", RUN+="/bin/raw /dev/raw/raw26 %N"

//permissions
ACTION=="add", KERNEL=="raw24", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw25", OWNER="oracle", GROUP="oinstall", MODE="660"
ACTION=="add", KERNEL=="raw26", OWNER="oracle", GROUP="oinstall", MODE="660"
//Editing the dbca_raw_config mapping file

Add the new entries to the dbca_raw_config file.
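The mapping file uses one name=rawdevice pair per line; the new entries would look something like the following (the names must match what dbca expects for the new undo tablespace and redo logs, so treat these as illustrative):

```
undotbs3=/dev/raw/raw24
redo3_2=/dev/raw/raw25
redo3_3=/dev/raw/raw26
```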

//Run dbca again to add the instance

--procedure shown in the screenshots--

For the full procedure, see: oracle10gRAC_addNode