1#
Posted on 2012-3-15 00:00:59 | Views: 9227 | Replies: 12
Be very careful when deleting a node from an 11.2.0.3 RAC. The steps themselves need no commentary; just follow the official documentation. The critical one is step 11 (removing the Grid software): one careless move there and the whole RAC is wrecked.... Details below:

==========================================================
[grid@rac2 bin]$ $ORACLE_HOME/deinstall/deinstall -local            -- this command removes the Grid software (I need to delete node2, so I run the script on node2)
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2012-03-14_05-01-17PM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

..... (output omitted here due to the forum's post-length limit)

Network Configuration clean config END


---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on
the local node after the execution completes on all the remote nodes.


Run the following command as the root user or the administrator on node "rac1".

/tmp/deinstall2012-03-14_05-01-17PM/perl/bin/perl -I/tmp/deinstall2012-03-14_05-01-17PM/perl/lib -I/tmp/deinstall2012-03-14_05-01-17PM/crs/install /tmp/deinstall2012-03-14_05-01-17PM/crs/install/rootcrs.pl -force
-deconfig -paramfile "/tmp/deinstall2012-03-14_05-01-17PM/response/deinstall_Ora11g_gridinfrahome1.rsp"


Run the following command as the root user or the administrator on node "rac2".

/tmp/deinstall2012-03-14_05-01-17PM/perl/bin/perl -I/tmp/deinstall2012-03-14_05-01-17PM/perl/lib -I/tmp/deinstall2012-03-14_05-01-17PM/crs/install /tmp/deinstall2012-03-14_05-01-17PM/crs/install/rootcrs.pl -force
-deconfig -paramfile "/tmp/deinstall2012-03-14_05-01-17PM/response/deinstall_Ora11g_gridinfrahome1.rsp"


Press Enter after you finish running the above commands

<----------------------------------------

Note:
    Surprisingly, the deinstall script detects both nodes, rac1 and rac2, and asks you to run a script on each of them before pressing Enter to continue....

Here comes the critical moment: do NOT, under any circumstances, run that script on the first node (rac1), or the entire RAC environment is destroyed. The rac1 script stops the whole CRS stack and cleans out the OCR and voting disk contents.... It also deletes the ohasd startup script under /etc and removes the ohasd entry from /etc/inittab...
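
Before running either script, it is worth capturing what a healthy cluster state looks like. A quick sanity-check sketch (run as root on a node you are keeping; exact output varies by environment):

# crsctl check cluster -all          -- CRS/CSS/EVM status on every node
# ocrcheck                           -- OCR integrity
# crsctl query css votedisk          -- voting disk locations
# grep ohasd /etc/inittab            -- the init entry the rac1 script would remove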


   At this point you only want to run the rac2 script and press Enter to continue, but Oracle decides the scripts have not all been executed and insists that every one of them be run...
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on
the local node after the execution completes on all the remote nodes.


Press Enter after you finish running the above commands

<----------------------------------------


---------------------------------------->

(... the same prompt block keeps being printed, identically, every time Enter is pressed ...)


    We are now in an infinite loop: run the other script and the RAC is destroyed; refuse to run it and the tool will not let you proceed with the rest of the operation.... There was no way out but to Ctrl+C and abort the deinstall script.

Anyone who has done a delete node on 11.2.0.1 or 11.2.0.2 will know that after the scripts finish and you press Enter to continue, the remaining work is just removing the Grid-software directories, like this (11.2.0.2 output):
Remove the directory: /tmp/deinstall2011-03-18_06-29-58AM on node:
Removing Windows and .NET products configuration END
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done
Delete directory '/u01/app/11.2.0/grid' on the local node : Done
Delete directory '/u01/app/oraInventory' on the local node : Done
Delete directory '/u01/app/grid' on the local node : Done
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END


    So here is the workaround: once Ctrl+C has killed the deinstall script, just rm -rf the relevant directories by hand, and don't forget to update the inventory at the end.
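
A sketch of that manual cleanup, assuming the default 11.2 paths shown in the 11.2.0.2 log above (substitute your own Grid home and inventory locations). On rac2, the node being deleted, as root:

rm -rf /u01/app/11.2.0/grid      -- Grid home
rm -rf /u01/app/oraInventory     -- local inventory
rm -rf /u01/app/grid             -- Grid base

Then, on a remaining node (rac1), refresh the inventory as the grid user, using the same -updateNodeList syntax as the documentation quoted in 8# below:

[grid@rac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={rac1}" CRS=TRUE -silent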


Finally, a check:
[grid@rac1 bin]$ cluvfy stage -post nodedel -n server2

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed

CRS integrity check passed

Node removal check passed

Post-check for node removal was successful.
[grid@rac1 bin]$




If you don't try things out yourself and just go by experience, sometimes you die without even knowing how. Sigh.




btw:

    I've attached the complete procedure; anyone interested can download it and take a look.

delete node from 11gr2 RAC.rar

22.6 KB, downloads: 1685

2#
Posted on 2012-3-15 00:02:13
Good !


3#
Posted on 2013-6-3 15:03:42
Heh, I hit the same kind of problem, though not this exact one; noting it down here. When running the final script, the official doc tells you to include a -local flag, and stresses repeatedly that without -local the whole cluster will be removed, yet the command line it prints out for you to execute does not carry that flag. I ended up appending it by hand.
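
Presumably the line, once fixed by hand, looked something like this (a sketch; the Grid home path is whatever your environment uses):

$ $ORACLE_HOME/deinstall/deinstall -local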


4#
Posted on 2013-6-3 17:41:15
There's a saying that's spot on: if you don't test it yourself, you won't even know how you died.


5#
Posted on 2013-6-4 16:41:58
Thanks for sharing. Astonishing that a situation like this even exists.


6#
Posted on 2013-6-5 08:34:18
Tell me, what is this method of yours based on? Ever since 10g I have been adding and deleting nodes and have never run into this kind of situation. If this is 11.2, please retest following the official procedure in 《Oracle® Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2) Part Number E16795-13》, section "Deleting Oracle RAC from a Cluster Node". Read it carefully; the steps are completely different from yours…………………………


7#
Posted on 2013-6-5 12:12:48
lunar posted on 2013-6-5 08:34:
    Tell me, what is this method of yours based on? Ever since 10g I have been adding and deleting nodes and have never run into this ...

I compared syhnd's steps carefully against the official documentation (《Oracle Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2) E16795-11》 + 《Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2) E16794-15》) and found no discrepancy; it is clear that syhnd followed the manuals step by step.
The step where the anomaly occurred reads as follows in the Clusterware Administration and Deployment Guide:

  For a local home, deinstall the Oracle Clusterware home from the node that
  you want to delete, as follows, by running the following command, where
  Grid_home is the path defined for the Oracle Clusterware home:
  $ Grid_home/deinstall/deinstall -local
  Caution: If you do not specify the -local flag, then the command
  removes the Grid Infrastructure home from every node in the cluster.

Unfortunately the documentation says nothing about the other details of what happens while that command runs. Could you upload the rootcrs.pl response file deinstall_Ora11g_gridinfrahome1.rsp from rac1 and rac2?


8#
Posted on 2013-6-5 17:21:27
To delete a node from a cluster:

Ensure that Grid_home correctly specifies the full directory path for the Oracle Clusterware home on each node, where Grid_home is the location of the installed Oracle Clusterware software.

Run the following command as either root or the user that installed Oracle Clusterware to determine whether the node you want to delete is active and whether it is pinned:

$ olsnodes -s -t
If the node is pinned, then run the crsctl unpin css command. Otherwise, proceed to the next step.
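
For illustration, an unpin instantiated with this thread's node name (assumed):

# crsctl unpin css -n rac2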

Disable the Oracle Clusterware applications and daemons running on the node. Run the rootcrs.pl script as root from the Grid_home/crs/install directory on the node to be deleted, as follows:

Note:
Before you run this command, you must stop the EMAGENT, as follows:

$ emctl stop dbconsole
If you are using Oracle Clusterware 11g release 2 (11.2.0.1) or Oracle Clusterware 11g release 2 (11.2.0.2), then do not include the -deinstall flag when running the rootcrs.pl script.

# ./rootcrs.pl -deconfig -deinstall -force
If you are deleting multiple nodes, then run the rootcrs.pl script on each node that you are deleting.

If you are deleting all nodes from a cluster, then append the -lastnode option to the preceding command to clear OCR and the voting disks, as follows:

# ./rootcrs.pl -deconfig -deinstall -force -lastnode
Caution:
Only use the -lastnode option if you are deleting all cluster nodes because that option causes the rootcrs.pl script to clear OCR and the voting disks of data.
Note:
If you do not use the -force option in the preceding command or the node you are deleting is not accessible for you to execute the preceding command, then the VIP resource remains running on the node. You must manually stop and remove the VIP resource using the following commands as root from any node that you are not deleting:
# srvctl stop vip -i vip_name -f
# srvctl remove vip -i vip_name -f
Where vip_name is the VIP for the node to be deleted. If you specify multiple VIP names, then separate the names with commas and surround the list in double quotation marks ("").
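
For illustration, with the deleted node's VIP name assumed to be rac2-vip:

# srvctl stop vip -i rac2-vip -f
# srvctl remove vip -i rac2-vip -f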

From any node that you are not deleting, run the following command from the Grid_home/bin directory as root to delete the node from the cluster:

# crsctl delete node -n node_to_be_deleted
On the node you want to delete, run the following command as the user that installed Oracle Clusterware from the Grid_home/oui/bin directory where node_to_be_deleted is the name of the node that you are deleting:

$ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES=
{node_to_be_deleted}" CRS=TRUE -silent -local
On the node that you are deleting, depending on whether you have a shared or local Oracle home, complete one of the following procedures as the user that installed Oracle Clusterware:

If you have a shared home, then run the following command from the Grid_home/oui/bin directory on the node you want to delete:

$ ./runInstaller -detachHome ORACLE_HOME=Grid_home -silent -local
Manually delete the following files:

/etc/oraInst.loc
/etc/oratab
/etc/oracle/
/opt/ORCLfmap/
$OraInventory/
For a local home, deinstall the Oracle Clusterware home from the node that you want to delete, as follows, by running the following command, where Grid_home is the path defined for the Oracle Clusterware home:

$ Grid_home/deinstall/deinstall -local
Caution:
If you do not specify the -local flag, then the command removes the Grid Infrastructure home from every node in the cluster.
On any node other than the node you are deleting, run the following command from the Grid_home/oui/bin directory where remaining_nodes_list is a comma-delimited list of the nodes that are going to remain part of your cluster:

$ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES=
{remaining_nodes_list}" CRS=TRUE -silent
Notes:
You must run this command a second time where ORACLE_HOME=ORACLE_HOME, and CRS=TRUE -silent is omitted from the syntax, as follows:

$ ./runInstaller -updateNodeList ORACLE_HOME=ORACLE_HOME
"CLUSTER_NODES={remaining_nodes_list}"
If you have a shared Oracle Grid Infrastructure home, then append the -cfs option to the command example in this step and provide a complete path location for the cluster file system.

Run the following CVU command to verify that the specified nodes have been successfully deleted from the cluster:

$ cluvfy stage -post nodedel -n node_list [-verbose]


9#
Posted on 2013-6-5 17:24:30
The documentation chapter "Deleting a Cluster Node on Linux and UNIX Systems" has 8 steps in total, yet he says "the critical one is step 11". Where did the other steps come from?

Besides, without the results and logs of the earlier steps, how do you judge whether an operation like the one above is reasonable? I honestly can't figure it out……


10#
Posted on 2013-6-5 18:19:41
The delete procedure I used before was the same as syhnd's, but the steps I see in the documentation match lunar's....


11#
Posted on 2013-6-5 18:47:43
Then which 11 steps exactly did you run? What were the first 10? How come the documentation has only 8 steps?


12#
Posted on 2013-6-6 09:42:14
Last edited by clevernby on 2013-6-6 09:45

Deleting a node from a RAC environment involves two documents.
Specifically:
1) "Deleting Instances from Oracle RAC Databases" is section 10.2.1 of 《Oracle Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2) E16795-11》. It corresponds to steps 1 and 2 of syhnd's procedure, covering "Using DBCA in Silent Mode to Delete Instances from Nodes" and "Verify that the instance has been removed from OCR".

2) "Removing Oracle RAC" is section 10.2.2 of the same manual. It corresponds to steps 3 through 6 of syhnd's procedure: "disable and stop listener on the node you are deleting", "update the inventory on the node you are deleting", "For a nonshared home, deinstall the Oracle home from the node you are deleting", and "update the inventories on any one of the remaining nodes".

3) "Deleting Nodes from the Cluster" is section 4.2.2 of 《Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2) E16794-15》. Steps 2 through 8 of that section map one-to-one onto steps 7 through 13 of syhnd's procedure.

This shows that syhnd's procedure followed the official manuals to the letter, and yet something unexpected happened. I suggest syhnd open an SR to find out why; otherwise, on 11.2.0.3 RAC at least, deleting a node becomes an uncontrollable operation, which is rather frightening.


13#
Posted on 2013-6-6 13:06:11
Last edited by anbob on 2013-6-6 13:08

I have done this and did not hit your problem.

For example, with three nodes, deleting node3:

Update Oracle Inventory:

[oracle@znode3 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={znode3}" -local

Remove Oracle database software:

[oracle@znode3 deinstall]$ ./deinstall -local
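
For completeness, a sketch of the matching inventory refresh on the remaining nodes (node names assumed to be znode1 and znode2), per the -updateNodeList syntax in the documentation quoted in 8#:

[oracle@znode1 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={znode1,znode2}"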

