Oracle Database Data Recovery & Performance Optimization

1#
Posted 2012-3-10 00:52:40 | Views: 6898 | Replies: 5
Question:
11gR2, ASM to ASM, with storage replication between primary and standby.

Has anyone implemented DR from an 11gR2 (ASM) RAC to a single instance using storage replication?
Download PRM-DUL, a professional Oracle database recovery tool: http://www.parnassusdata.com/zh-hans/emergency-services

If you cannot resolve the issue yourself, the ParnassusData professional Oracle database recovery team can help you.

Service hotline: 13764045638  QQ: 47079569
2#
Posted 2012-3-10 00:53:41
Answer:

How to restore ASM based OCR after complete loss of the CRS diskgroup on Linux/Unix systems (Doc ID 1062983.1)




When migrating the production database to a test server, the public IP needs to change to a different subnet, while the private IP does not need to be modified.

To modify the public subnet and interface information stored in the OCR, the following two commands would normally be run while CRS is running:

% $ORA_CRS_HOME/bin/oifcfg delif -global eth0
% $ORA_CRS_HOME/bin/oifcfg setif -global eth0/10.2.166.0:public

But the GI stack on the test server has been stopped, so the above commands cannot be executed.
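One possible workaround (a sketch only, relying on the same exclusive-mode technique as the OCR restore procedure quoted later in this thread; verify against your exact version and configuration before relying on it): start CRS in exclusive mode on one node so the OCR becomes accessible without the full stack, run the oifcfg commands, then restart CRS normally.

```
# as root: start a minimal CRS stack so the OCR is readable/writable
$ORA_CRS_HOME/bin/crsctl start crs -excl
# replace the public interface definition stored in the OCR
$ORA_CRS_HOME/bin/oifcfg delif -global eth0
$ORA_CRS_HOME/bin/oifcfg setif -global eth0/10.2.166.0:public
# shut the exclusive-mode stack down and restart CRS normally
$ORA_CRS_HOME/bin/crsctl stop crs -f
$ORA_CRS_HOME/bin/crsctl start crs
```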


3#
Posted 2012-3-10 01:30:08
ODM Data:
How to restore ASM based OCR after complete loss of the CRS diskgroup on Linux/Unix systems

Applies to:
Oracle Server - Enterprise Edition - Version: 11.2.0.1.0 to 11.2.0.2 - Release: 11.2 to 11.2
Information in this document applies to any platform.

Goal

It is not possible to directly restore a manual or automatic OCR backup if the OCR is located in an ASM disk group, because the command 'ocrconfig -restore' requires ASM to be up and running in order to restore an OCR backup to an ASM disk group. However, for ASM to be available, the CRS stack must have been started successfully. For the restore to succeed, the OCR must also not be in use (r/w), i.e. no CRS daemon may be running while the OCR is being restored.

A description of the general procedure to restore the OCR can be found in the documentation; this document explains how to recover from a complete loss of the ASM disk group that held the OCR and Voting files in an 11gR2 Grid environment.

Solution

When using an ASM disk group for CRS, there are typically 3 different types of files located in the disk group that potentially need to be restored/recreated:

- the Oracle Cluster Registry file (OCR)
- the Voting file(s)
- the shared SPFILE for the ASM instances

The following example assumes that the OCR was located in a single disk group used exclusively for CRS. The disk group has just one disk using external redundancy.

Since the CRS disk group has been lost, the CRS stack will not be available on any node.

The following settings used in the example would need to be replaced according to the actual configuration:

GRID user:                       oragrid
GRID home:                       /u01/app/11.2.0/grid ($CRS_HOME)
ASM disk group name for OCR:     CRS
ASM/ASMLIB disk name:            ASMD40
Linux device name for ASM disk:  /dev/sdh1
Cluster name:                    rac_cluster1
Nodes:                           racnode1, racnode2

This document assumes that the name of the OCR diskgroup remains unchanged. However, there may be a need to use a different diskgroup name, in which case the name of the OCR diskgroup would have to be modified in /etc/oracle/ocr.loc across all nodes prior to executing the following steps.
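That ocr.loc edit can be sketched as follows (a demo that works on a copy under /tmp; on a real cluster the file is /etc/oracle/ocr.loc, edited as root on every node, and the two-line content shown is an assumption matching a typical 11.2 Linux install):

```shell
# Demo: point the OCR location file at a new diskgroup name (+CRS -> +CRS2).
# Assumption: /tmp/ocr.loc stands in for /etc/oracle/ocr.loc.
cat > /tmp/ocr.loc <<'EOF'
ocrconfig_loc=+CRS
local_only=FALSE
EOF
# rewrite only the diskgroup reference, leaving the rest of the file intact
sed -i 's|^ocrconfig_loc=+CRS$|ocrconfig_loc=+CRS2|' /tmp/ocr.loc
cat /tmp/ocr.loc
```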

1. Locate the latest automatic OCR backup

When using a non-shared CRS home, automatic OCR backups can be located on any node of the cluster; consequently all nodes need to be checked for the most recent backup:

$ ls -lrt $CRS_HOME/cdata/rac_cluster1/
-rw------- 1 root root 7331840 Mar 10 18:52 week.ocr
-rw------- 1 root root 7651328 Mar 26 01:33 week_.ocr
-rw------- 1 root root 7651328 Mar 29 01:33 day.ocr
-rw------- 1 root root 7651328 Mar 30 01:33 day_.ocr
-rw------- 1 root root 7651328 Mar 30 01:33 backup02.ocr
-rw------- 1 root root 7651328 Mar 30 05:33 backup01.ocr
-rw------- 1 root root 7651328 Mar 30 09:33 backup00.ocr
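Selecting the most recent backup can be scripted; a minimal sketch, using a demo directory and faked timestamps instead of the real $CRS_HOME/cdata/rac_cluster1:

```shell
# Demo: select the newest *.ocr backup by modification time.
# Assumption: the demo dir stands in for $CRS_HOME/cdata/rac_cluster1.
dir=/tmp/ocr_backups_demo
mkdir -p "$dir"
touch -t 202403100100 "$dir/week.ocr"      # oldest
touch -t 202403290100 "$dir/day.ocr"
touch -t 202403300933 "$dir/backup00.ocr"  # newest
# ls -t sorts newest first, so the first entry is the latest backup
latest=$(ls -t "$dir"/*.ocr | head -1)
echo "most recent backup: $latest"
```

On a real cluster this check must be repeated on every node, since automatic backups are written by the OCR master node, which can change over time.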

2. Make sure the Grid Infrastructure is shut down on all nodes

Given that the OCR diskgroup is missing, the GI stack will not be functional on any node; however, there may still be various daemon processes running. On each node shut down the GI stack using the force (-f) option:
# $CRS_HOME/bin/crsctl stop crs -f


3. Start the CRS stack in exclusive mode

On the node that has the most recent OCR backup, log on as root and start CRS in exclusive mode. This mode allows ASM to start and stay up without the presence of a Voting disk and without the CRS daemon process (crsd.bin) running.

11.2.0.1:
# $CRS_HOME/bin/crsctl start crs -excl
...
CRS-2672: Attempting to start 'ora.asm' on 'racnode1'
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded

Please note:
This document assumes that the CRS diskgroup was completely lost, in which case the CRS daemon (resource ora.crsd) will terminate again due to the inaccessibility of the OCR - even if the above message indicates that the start succeeded.
If this is not the case - i.e. if the CRS diskgroup is still present (but corrupt or incorrect) - the CRS daemon needs to be shut down manually using:
# $CRS_HOME/bin/crsctl stop res ora.crsd -init

otherwise the subsequent OCR restore will fail.

11.2.0.2:
# $CRS_HOME/bin/crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
...
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'racnode1'
CRS-2672: Attempting to start 'ora.ctssd' on 'racnode1'
CRS-2676: Start of 'ora.drivers.acfs' on 'racnode1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'racnode1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'racnode1'
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded

IMPORTANT:
A new option '-nocrs' has been introduced with 11.2.0.2, which prevents the start of the ora.crsd resource. It is vital that this option is specified; otherwise the failure to start the ora.crsd resource will tear down ora.cluster_interconnect.haip, which in turn will cause ASM to crash.


4. Label the CRS disk for ASMLIB use

If using ASMLIB, the disk to be used for the CRS disk group needs to be stamped first. As user root do:
# /usr/sbin/oracleasm createdisk ASMD40 /dev/sdh1
Writing disk header: done
Instantiating disk: done


5. Create the CRS diskgroup via sqlplus

The disk group can now be (re-)created via sqlplus as the grid user. The compatible.asm attribute must be set to 11.2 in order for the disk group to be used by CRS:
$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.1.0 Production on Tue Mar 30 11:47:24 2010
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> create diskgroup CRS external redundancy disk 'ORCL:ASMD40' attribute 'COMPATIBLE.ASM' = '11.2';

Diskgroup created.
SQL> exit


6. Restore the latest OCR backup

Now that the CRS disk group is created and mounted, the OCR can be restored - this must be done as the root user:
# cd $CRS_HOME/cdata/rac_cluster1/
# $CRS_HOME/bin/ocrconfig -restore backup00.ocr


7. Start the CRS daemon on the current node (11.2.0.1 only!)

Now that the OCR has been restored, the CRS daemon can be started; this is needed to recreate the Voting file. Skip this step for 11.2.0.2.0.
# $CRS_HOME/bin/crsctl start res ora.crsd -init
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded


8. Recreate the Voting file

The Voting file needs to be initialized in the CRS disk group:
# $CRS_HOME/bin/crsctl replace votedisk +CRS
Successful addition of voting disk 00caa5b9c0f54f3abf5bd2a2609f09a9.
Successfully replaced voting disk group with +CRS.
CRS-4266: Voting file(s) successfully replaced


9. Recreate the SPFILE for ASM (optional)


Please note:

If you are
- not using an SPFILE for ASM
- not using a shared SPFILE for ASM
- using a shared SPFILE not stored in ASM (e.g. on a cluster file system)
this step can be skipped.

Also use extra care with regard to the asm_diskstring parameter, as it impacts the discovery of the voting disks.

Please verify the previous settings using the ASM alert log.

Prepare a pfile (e.g. /tmp/asm_pfile.ora) with the ASM startup parameters - these may vary from the example below. If in doubt, consult the ASM alert log, as the ASM instance startup should list all non-default parameter values. Please note that the last startup of ASM (in step 3, via the exclusive-mode CRS start) will not have used an SPFILE, so a startup prior to the loss of the CRS disk group would need to be located.
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oragrid'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Now the SPFILE can be created using this PFILE:
$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.1.0 Production on Tue Mar 30 11:52:39 2010
Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> create spfile='+CRS' from pfile='/tmp/asm_pfile.ora';

File created.
SQL> exit


10. Shutdown CRS

Since CRS is running in exclusive mode, it needs to be shut down to allow CRS to run on all nodes again. Use of the force (-f) option may be required:
# $CRS_HOME/bin/crsctl stop crs -f
...
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.


11. Rescan ASM disks

If using ASMLIB, rescan all ASM disks on each node as the root user:
# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ASMD40"


12. Start CRS

As the root user, start CRS on all cluster nodes:
# $CRS_HOME/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.


13. Verify CRS

To verify that CRS is fully functional again:
# $CRS_HOME/bin/crsctl check cluster -all
**************************************************************
racnode1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

# $CRS_HOME/bin/crsctl status resource -t
...


4#
Posted 2012-6-1 10:05:47

Implemented storage replication from ASM+RAC to a single instance

We just completed a RAC-to-single-instance storage replication DR setup, using HDS HUR.

The single-instance side opens normally and everything works.

However, the customer wants us to compare the two databases and quantify exactly how much data, in terms of time, they differ by, since HUR currently syncs every 30 seconds, plus the time needed to split the SI pair.

This is genuinely hard to measure. The only ideas so far: create a test table and have a stored procedure insert a timestamp row once per second, or shut down and compare the SCNs of the data files with the standby in mount state.

Is there a better way to verify whether the data is consistent, or another angle from which to approach the verification?
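The heartbeat idea described above can be sketched as a sqlplus session (a sketch only; the table and job names hb_probe/HB_JOB are made up for illustration, and this would run on the primary side):

```
$ sqlplus / as sysdba
SQL> create table hb_probe (ts timestamp default systimestamp);
SQL> begin
       dbms_scheduler.create_job(
         job_name        => 'HB_JOB',
         job_type        => 'PLSQL_BLOCK',
         job_action      => 'insert into hb_probe values (default); commit;',
         repeat_interval => 'FREQ=SECONDLY;INTERVAL=1',
         enabled         => TRUE);
     end;
     /
SQL> -- after splitting the pair and opening the DR copy, run there:
SQL> select max(ts) from hb_probe;
```

The gap between max(ts) on the DR copy and the time the pair was split gives a rough measure of the replication lag.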


5#
Posted 2012-6-1 17:59:37

Awaiting a reply.


6#
Posted 2012-6-1 19:29:56
Several methods for comparing whether the data in two tables is consistent in Oracle: http://www.oracledatabase12g.com ... -tables-method.html
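In the same spirit as the linked article, one very simple cross-check (a sketch; it assumes each database spools the same table, ordered by primary key, into a flat file, here faked with printf) is to compare checksums of the two spools:

```shell
# Demo: compare two ordered table spools via md5 checksums.
# Assumption: primary.dat / standby.dat are spools of the same table
# from the two databases, produced with identical ORDER BY and formatting.
printf 'row1\nrow2\nrow3\n' > /tmp/primary.dat
printf 'row1\nrow2\nrow3\n' > /tmp/standby.dat
p=$(md5sum /tmp/primary.dat | awk '{print $1}')
s=$(md5sum /tmp/standby.dat | awk '{print $1}')
if [ "$p" = "$s" ]; then echo "spools match"; else echo "spools differ"; fi
```

Matching checksums imply identical spooled data; a mismatch only tells you the tables differ, not by how much, so it complements rather than replaces the timestamp-heartbeat approach.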

