2#
Posted on 2012-3-3 20:47:36
"How do I physically back up the OCR and voting disk of CRS in a RAC on Windows?"
ODM Data:- OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE) [ID 428681.1]
- Applies to:
- Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.2.0.3 - Release: 10.2 to 11.2
- Information in this document applies to any platform.
- Goal
- The goal of this note is to provide steps to add, remove, replace or move an Oracle Cluster Registry (OCR) or voting disk in Oracle Clusterware 10gR2, 11gR1 and 11gR2 environments. It also provides steps to move OCR / voting and ASM devices from raw devices to block devices.
- This article is intended for DBAs and Support Engineers who need to modify or move OCR and voting disk files, and for customers who have an existing clustered environment deployed on a storage array and want to migrate to a new storage array with minimal downtime.
- Typically, one would simply cp or dd the files once the new storage has been presented to the hosts. In this case, it is a little more difficult because:
- 1. The Oracle Clusterware has the OCR and voting disks open and is actively using them. (Both primary and mirrors)
- 2. There is an API provided for this function (ocrconfig and crsctl), which is the appropriate interface, rather than the typical cp and/or dd commands.
- It is highly recommended to take a backup of the voting disk, and OCR device before making any changes.
- Oracle Cluster Registry (OCR) and Voting Disk: Additional Clarifications
- The following steps assume the cluster is set up using Oracle redundancy with 3 voting disks and 2 OCR devices.
- Solution
- ADD/REMOVE/REPLACE/MOVE OCR Device
- Note: You must be logged in as the root user, because root owns the OCR files. The "ocrconfig -replace" command can only be issued when CRS is running, otherwise "PROT-1: Failed to initialize ocrconfig" will occur.
- Make sure there is a recent copy of the OCR file before making any changes:
- ocrconfig -showbackup
- If there is no recent backup copy of the OCR file, an export of the current OCR file can be taken. Use the following command to generate an export of the online OCR file:
- In 10.2
- # ocrconfig -export <OCR export_filename> -s online
- In 11.1 and 11.2
- # ocrconfig -manualbackup
- node1 2008/08/06 06:11:58 /crs/cdata/crs/backup_20080807_003158.ocr
- If you need to recover using this file, the following command can be used:
- # ocrconfig -import <OCR export_filename>
- From 11.2 onwards, please also refer to "How to restore ASM based OCR after complete loss of the CRS diskgroup on Linux/Unix systems", Document 1062983.1.
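The version split above can be wrapped in a small helper. This is a minimal dry-run sketch: it only prints the version-appropriate backup command (the export path is illustrative, not from the note), and the printed command must still be run as root on the cluster.

```shell
#!/bin/sh
# Dry-run sketch: print the OCR backup command appropriate for the
# Clusterware version. The export file path is a made-up example.
ocr_backup_cmd() {
  case "$1" in
    10.*) echo "ocrconfig -export /backup/ocr_export.dmp -s online" ;;
    11.*) echo "ocrconfig -manualbackup" ;;
    *)    echo "unsupported version: $1" >&2; return 1 ;;
  esac
}

ocr_backup_cmd "10.2"
ocr_backup_cmd "11.2"
```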
- To see whether the OCR is healthy, run ocrcheck, which should return output like the following:
- # ocrcheck
- Status of Oracle Cluster Registry is as follows :
- Version : 2
- Total space (kbytes) : 497928
- Used space (kbytes) : 312
- Available space (kbytes) : 497616
- ID : 576761409
- Device/File Name : /dev/raw/raw1
- Device/File integrity check succeeded
- Device/File Name : /dev/raw/raw2
- Device/File integrity check succeeded
- Cluster registry integrity check succeeded
- For 11.2+, ocrcheck as root user should also show:
- Logical corruption check succeeded
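The health check can also be scripted by scanning the ocrcheck report for the integrity lines. The sketch below runs against the sample output from the note; on a live node you would pipe the real `ocrcheck` output (as root) instead of the embedded sample.

```shell
#!/bin/sh
# Sketch: flag an unhealthy OCR by scanning ocrcheck output.
# Sample lines copied from the note; on a real node use: ocrcheck | ...
ocrcheck_output='Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File Name : /dev/raw/raw2
Device/File integrity check succeeded
Cluster registry integrity check succeeded'

devices=$(printf '%s\n' "$ocrcheck_output" | grep -c 'Device/File Name')
passed=$(printf '%s\n' "$ocrcheck_output" | grep -c 'integrity check succeeded')

# every device line plus the final registry line should report success
if [ "$passed" -eq $((devices + 1)) ]; then
  status="healthy"
else
  status="check failed"
fi
echo "OCR status: $status ($devices device(s))"
```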
- 1. To add an OCRMIRROR device when only one OCR device is defined:
- To add an OCR mirror device, provide the full path including file name.
- 10.2 and 11.1:
- # ocrconfig -replace ocrmirror <filename>
- 11.2+: From 11.2 onwards, up to 4 OCR mirrors can be added
- # ocrconfig -add <filename>
- 2. To remove an OCR device:
- To remove an OCR device:
- 10.2 and 11.1:
- # ocrconfig -replace ocr
- 11.2+:
- # ocrconfig -delete <filename>
- * Once an OCR device is removed, the ocrmirror device automatically becomes the OCR device.
- * Removing the OCR device is not allowed if only 1 OCR device is defined; the command will return PROT-16.
- To remove an OCR mirror device:
- 10.2 and 11.1:
- # ocrconfig -replace ocrmirror
- 11.2+:
- # ocrconfig -delete <ocrmirror filename>
- After removal, the old OCR/OCRMIRROR files can be deleted if they are on a cluster file system.
- 3. To replace or move the location of an OCR device:
- Note: 1. An ocrmirror must be in place before trying to replace the OCR device. ocrconfig will fail with PROT-16 if there is no ocrmirror.
- 2. If an OCR device is replaced with a device of a different size, the size of the new device will not be reflected until the clusterware is restarted.
- 3. If OCR is on cluster file system, the new OCR or OCRMIRROR file must be touched first before replace command can be issued. Otherwise PROT-21: Invalid parameter will occur.
- 10.2 and 11.1:
- To replace the OCR device with <filename>, provide the full path including file name.
- # ocrconfig -replace ocr <filename>
- To replace the OCR mirror device with <filename>, provide the full path including file name.
- # ocrconfig -replace ocrmirror <filename>
- 11.2:
- The same command is used to replace either the OCR or an OCRMIRROR:
- # ocrconfig -replace <current filename> -replacement <new filename>
- eg:
- # ocrconfig -replace /cluster_file/ocr.dat -replacement +OCRVOTE
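Since the replace syntax changed in 11.2, scripts can dispatch on version. A minimal dry-run sketch (it only builds and prints the command; the device names are examples, and the printed command must still be run as root with CRS up):

```shell
#!/bin/sh
# Dry-run sketch: build the version-appropriate OCR replace command.
# $1 = version, $2 = ocr|ocrmirror, $3 = current file (11.2+ only), $4 = new file
ocr_replace_cmd() {
  case "$1" in
    10.*|11.1*) echo "ocrconfig -replace $2 $4" ;;
    11.2*)      echo "ocrconfig -replace $3 -replacement $4" ;;
  esac
}

ocr_replace_cmd 10.2 ocr "" /dev/sdb1
ocr_replace_cmd 11.2 ocr /cluster_file/ocr.dat +OCRVOTE
```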
- Example: Moving OCR from Raw Device to Block Device (pre 11.2)
- The OCR disk must be owned by root, in the oinstall group, and must have permissions set to 640. Provide at least 100 MB disk space for the OCR.
- In this example the OCR files will be on the following devices:
- /dev/raw/raw1
- /dev/raw/raw2
- There are two different ways to move the OCR from a raw device to a block device: one that requires a full cluster outage, and one with no outage. The offline method is recommended for 10.2 and earlier, since a cluster outage is required anyway due to an Oracle bug that prevents online addition and deletion of voting files. This bug is fixed in 11.1, so either the online or offline method can be used from 11.1 onwards.
- Method 1 (Online)
- If there are additional block devices of same or larger size available, one can perform 'ocrconfig -replace'.
- PROS: No cluster outage required. Run 2 commands and changes are reflected across the entire cluster.
- CONS: Requires temporary additional block devices of 256 MB in size. The storage pointed to by the raw devices can be reclaimed when the operation completes.
- On one node as root run:
- # ocrconfig -replace ocr /dev/sdb1
- # ocrconfig -replace ocrmirror /dev/sdc1
- For every ocrconfig or ocrcheck command, a trace file is written to the $CRS_HOME/log/<hostname>/client directory. Below is an example from a successful "ocrconfig -replace ocr" command:
- Oracle Database 10g CRS Release 10.2.0.4.0 Production Copyright 1996, 2008 Oracle. All rights reserved.
- 2008-08-06 07:07:10.424: [ OCRCONF][3086866112]ocrconfig starts...
- 2008-08-06 07:07:11.328: [ OCRCONF][3086866112]Successfully replaced OCR and set block 0
- 2008-08-06 07:07:11.328: [ OCRCONF][3086866112]Exiting [status=success]...
- Now run ocrcheck to verify that the OCR is pointing to the block device and no error is returned.
- Status of Oracle Cluster Registry is as follows :
- Version : 2
- Total space (kbytes) : 497776
- Used space (kbytes) : 3844
- Available space (kbytes) : 493932
- ID : 576761409
- Device/File Name : /dev/sdb1
- Device/File integrity check succeeded
- Device/File Name : /dev/sdc1
- Device/File integrity check succeeded
- Cluster registry integrity check succeeded
- Method 2 (Offline)
- An in-place method for when additional storage is not available; this requires cluster downtime.
- Below, the existing mapping from the raw bindings to the block devices is defined in /etc/sysconfig/rawdevices:
- /dev/raw/raw1 /dev/sdb1
- /dev/raw/raw2 /dev/sdc1
- # raw -qa
- /dev/raw/raw1: bound to major 8, minor 17
- /dev/raw/raw2: bound to major 8, minor 33
- # ls -l /dev/raw/raw*
- crw-r----- 1 root oinstall 162, 1 Jul 24 10:39 /dev/raw/raw1
- crw-r----- 1 root oinstall 162, 2 Jul 24 10:39 /dev/raw/raw2
- # ls -ltra /dev/*
- brw-r----- 1 root oinstall 8, 17 Jul 24 10:39 /dev/sdb1
- brw-r----- 1 root oinstall 8, 33 Jul 24 10:39 /dev/sdc1
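The major/minor numbers are the link between the raw bindings and the block devices, so they can be cross-checked in a script. A small sketch that extracts "device major,minor" pairs from `raw -qa` output (the sample lines are from the listing above; on a live node pipe the real command):

```shell
#!/bin/sh
# Sketch: extract "raw-device major,minor" pairs from `raw -qa` output so
# they can be compared against `ls -l` of the block devices.
raw_qa='/dev/raw/raw1: bound to major 8, minor 17
/dev/raw/raw2: bound to major 8, minor 33'

# split each line on spaces, commas and colons; fields 5 and 7 are major/minor
pairs=$(printf '%s\n' "$raw_qa" | awk -F'[ ,:]+' '{print $1, $5 "," $7}')
echo "$pairs"
```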
- 1. Shutdown Oracle Clusterware on all nodes using "crsctl stop crs" as root.
- 2. On all nodes run the following commands as root:
- # ocrconfig -repair ocr /dev/sdb1
- # ocrconfig -repair ocrmirror /dev/sdc1
- 3. On one node as root run:
- # ocrconfig -overwrite
- In the $CRS_HOME/log/<hostname>/client directory there is a trace file from "ocrconfig -overwrite", named ocrconfig_<pid>.log, which should exit with status=success as below:
- cat /crs/log/node1/client/ocrconfig_20022.log
- Oracle Database 10g CRS Release 10.2.0.4.0 Production Copyright 1996, 2008 Oracle. All rights reserved.
- 2008-08-06 06:41:29.736: [ OCRCONF][3086866112]ocrconfig starts...
- 2008-08-06 06:41:31.535: [ OCRCONF][3086866112]Successfully overwrote OCR configuration on disk
- 2008-08-06 06:41:31.535: [ OCRCONF][3086866112]Exiting [status=success]...
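The success check on the client trace can be scripted. In this sketch a scratch file stands in for $CRS_HOME/log/<hostname>/client/ocrconfig_<pid>.log so it runs anywhere; the sample lines mirror the trace shown above.

```shell
#!/bin/sh
# Sketch: confirm an ocrconfig client trace exited with status=success.
log=$(mktemp)
cat > "$log" <<'EOF'
2008-08-06 06:41:29.736: [ OCRCONF][3086866112]ocrconfig starts...
2008-08-06 06:41:31.535: [ OCRCONF][3086866112]Successfully overwrote OCR configuration on disk
2008-08-06 06:41:31.535: [ OCRCONF][3086866112]Exiting [status=success]...
EOF

if grep -q 'Exiting \[status=success\]' "$log"; then
  result="success"
else
  result="failure"
fi
rm -f "$log"
echo "ocrconfig result: $result"
```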
- As a verification step run ocrcheck on all nodes and the Device/File Name should reflect the block devices replacing the raw devices:
- # ocrcheck
- Status of Oracle Cluster Registry is as follows :
- Version : 2
- Total space (kbytes) : 497776
- Used space (kbytes) : 3844
- Available space (kbytes) : 493932
- ID : 576761409
- Device/File Name : /dev/sdb1
- Device/File integrity check succeeded
- Device/File Name : /dev/sdc1
- Device/File integrity check succeeded
- Cluster registry integrity check succeeded
- Example of adding an OCR device file on raw device (Pre 11.2)
- If you have upgraded your environment from a previous version, where you only had one OCR device file, you can use the following step to add an OCRMIRROR file.
- Add /dev/raw/raw2 as OCR mirror device
- # ocrconfig -replace ocrmirror /dev/raw/raw2
- Example of adding/replacing OCR/OCRMIRROR on a cluster file system (pre 11.2)
- The new OCR/OCRMIRROR file on the cluster filesystem must exist before add/replace can happen. For example, the new OCR and OCRMIRROR will be located under:
- /cluster_fs/OCR/newocr.dat
- /cluster_fs/OCR/newocrm.dat
- As root user:
- # touch /cluster_fs/OCR/newocr.dat
- # touch /cluster_fs/OCR/newocrm.dat
- # chown root:oinstall /cluster_fs/OCR/newocr.dat
- # chown root:oinstall /cluster_fs/OCR/newocrm.dat
- # chmod 640 /cluster_fs/OCR/newocr.dat
- # chmod 640 /cluster_fs/OCR/newocrm.dat
- To add OCRMIRROR:
- # ocrconfig -replace ocrmirror /cluster_fs/OCR/newocrm.dat
- To replace OCR or OCRMIRROR:
- # ocrconfig -replace ocr /cluster_fs/OCR/newocr.dat
- # ocrconfig -replace ocrmirror /cluster_fs/OCR/newocrm.dat
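The file-preparation steps above can be scripted. The sketch below exercises the same touch/chmod sequence against a scratch directory so it runs unprivileged; on the real cluster file system run it as root with DIR=/cluster_fs/OCR and the chown line uncommented.

```shell
#!/bin/sh
# Sketch: pre-create replacement OCR files with the required mode.
# A temp directory stands in for /cluster_fs/OCR so this runs unprivileged.
DIR=$(mktemp -d)
for f in newocr.dat newocrm.dat; do
  touch "$DIR/$f"
  # chown root:oinstall "$DIR/$f"   # requires root on the real system
  chmod 640 "$DIR/$f"
done

mode=$(stat -c '%a' "$DIR/newocr.dat")   # GNU stat (Linux)
count=$(ls "$DIR" | wc -l)
echo "created $count files with mode $mode"
rm -rf "$DIR"
```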
- ADD/DELETE/MOVE Voting Disk
- Note: 1. crsctl votedisk commands must be run as root.
- 2. If the new voting disk is on a cluster file system, it needs to be touched with the proper ownership and permissions before it can be added.
- 3. If the old voting disk is on a cluster file system, it needs to be deleted manually after the crsctl delete css votedisk command.
- 4. The voting disk must be owned by the oracle user, in the oinstall group, and must have permissions set to 644. In 10g, provide at least 20 MB of disk space for the voting disk; in 11g, provide at least 280 MB.
- 10.2
- Shutdown the Oracle Clusterware (crsctl stop crs as root) on all nodes before making any modification to the voting disk. Determine the current voting disk location using:
- crsctl query css votedisk
- 1. To add a Voting Disk, provide the full path including file name:
- # crsctl add css votedisk <VOTEDISK_LOCATION> -force
- 2. To delete a Voting Disk, provide the full path including file name:
- # crsctl delete css votedisk <VOTEDISK_LOCATION> -force
- 3. To move a Voting Disk, provide the full path including file name:
- # crsctl add css votedisk <NEW_LOCATION> -force
- # crsctl delete css votedisk <OLD_LOCATION> -force
- After modifying the voting disk, start the Oracle Clusterware stack on all nodes:
- # crsctl start crs
- Verify the voting disk location using
- # crsctl query css votedisk
- 11.1
- Starting with 11.1.0.6, the below commands can be performed online.
- 1. To add a Voting Disk, provide the full path including file name:
- # crsctl add css votedisk <VOTEDISK_LOCATION>
- 2. To delete a Voting Disk, provide the full path including file name:
- # crsctl delete css votedisk <VOTEDISK_LOCATION>
- 3. To move a Voting Disk, provide the full path including file name:
- # crsctl add css votedisk <NEW_LOCATION>
- # crsctl delete css votedisk <OLD_LOCATION>
- Verify the voting disk location using:
- # crsctl query css votedisk
- 11.2+:
- From 11.2, the votedisk can be stored on either an ASM diskgroup or a cluster file system. The following commands can only be executed when GI is running, either in cluster mode or exclusive mode. As the grid user:
- 1. To add a Voting Disk
- a. When votedisk is on cluster file system:
- $ crsctl add css votedisk <VOTEDISK_LOCATION/filename>
- b. When the votedisk is on an ASM diskgroup, no add option is available. The number of votedisks is determined by the diskgroup redundancy. If more votedisk copies are desired, one can move the votedisk to a diskgroup with higher redundancy.
- 2. To delete a Voting Disk
- a. When votedisk is on cluster file system:
- $ crsctl delete css votedisk <VOTEDISK_LOCATION/filename>
- b. When the votedisk is on ASM, no delete option is available; one can only replace the existing votedisk group with another ASM diskgroup.
- 3. To move a Voting Disk
- a. When votedisk is on cluster file system:
- $ crsctl add css votedisk <NEW VOTEDISK_LOCATION/filename>
- $ crsctl delete css votedisk <OLD VOTEDISK_LOCATION/filename>
- b. When votedisk is on ASM or moving votedisk between cluster file system and ASM diskgroup
- $ crsctl replace votedisk <+diskgroup>|<vdisk>
- eg:
- move from ASM to cluster file system:
- $ crsctl replace votedisk /shared/vote.dat
- Now formatting voting disk: /shared/vote.dat.
- CRS-4256: Updating the profile
- Successful addition of voting disk 32ff90ab38a04f65bf0c428c8fea9721.
- Successful deletion of voting disk 3d34623f09b64f9dbfa44fabf455513e.
- Successful deletion of voting disk 7043c38000a24f1abf36473ca7e9cd9e.
- Successful deletion of voting disk 18de241007df4f9cbf3fbb4193f0ecb4.
- CRS-4256: Updating the profile
- CRS-4266: Voting file(s) successfully replaced
- move from an ASM diskgroup +CRS to ASM diskgroup +OCRVOTE:
- $ crsctl replace votedisk +OCRVOTE
- CRS-4256: Updating the profile
- Successful addition of voting disk 3d34623f09b64f9dbfa44fabf455513e.
- Successful addition of voting disk 7043c38000a24f1abf36473ca7e9cd9e.
- Successful addition of voting disk 18de241007df4f9cbf3fbb4193f0ecb4.
- Successful deletion of voting disk a32c9b158e644fabbfdcc239c76f22a0.
- Successfully replaced voting disk group with +CRS.
- CRS-4256: Updating the profile
- CRS-4266: Voting file(s) successfully replaced
- 4. To verify:
- $ crsctl query css votedisk
- EXAMPLE MOVING VOTING DISK FROM RAW DEVICE to BLOCK DEVICE(Pre 11.2)
- In this example the voting disks will be on the following devices:
- /dev/raw/raw4
- /dev/raw/raw5
- /dev/raw/raw6
- Backup Voting before starting any modification.
- To determine the configured voting devices run "crsctl query css votedisk"
- # crsctl query css votedisk
- 0. 0 /dev/raw/raw4
- 1. 0 /dev/raw/raw5
- 2. 0 /dev/raw/raw6
- located 3 votedisk(s).
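For scripting, the configured voting disks can be counted from the query output. The sketch below uses the sample output shown above; on a live node pipe the real `crsctl query css votedisk` instead.

```shell
#!/bin/sh
# Sketch: count configured voting disks from `crsctl query css votedisk`
# output. Sample output copied from the note.
votedisk_out='0. 0 /dev/raw/raw4
1. 0 /dev/raw/raw5
2. 0 /dev/raw/raw6
located 3 votedisk(s).'

# voting disk entries are the lines starting with an index number
count=$(printf '%s\n' "$votedisk_out" | grep -c '^[0-9][0-9]*\. ')
echo "voting disks: $count"
```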
- Backup Voting
- Take a backup of all voting disks:
- $ dd if=voting_disk_name of=backup_file_name
- For Windows:
- ocopy \\.\votedsk1 o:\backup\votedsk1.bak
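The dd backup is worth verifying before touching the cluster. In the sketch below a scratch file stands in for the voting device so it runs anywhere; on a real cluster use the actual raw device path and a safe backup destination.

```shell
#!/bin/sh
# Sketch: back up a voting disk with dd and verify the copy byte-for-byte.
vote=$(mktemp)
backup=$(mktemp)

dd if=/dev/urandom of="$vote" bs=1024 count=16 2>/dev/null   # stand-in voting disk
dd if="$vote" of="$backup" bs=1024 2>/dev/null               # the backup itself

verified=$(cmp -s "$vote" "$backup" && echo yes || echo no)
rm -f "$vote" "$backup"
echo "backup verified: $verified"
```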
- Moving Voting Device from RAW Device to Block Device
- 1) Run crsctl query css votedisk to determine the current voting disks:
- # crsctl query css votedisk
- 0. 0 /dev/raw/raw4
- 1. 0 /dev/raw/raw5
- 2. 0 /dev/raw/raw6
- located 3 votedisk(s).
- 2) Shutdown Oracle Clusterware on all nodes using "crsctl stop crs" as root.
- Note: This step is only required for 10g CRS. For 11.1 this is an online operation and no cluster outage is required.
- 3) Add the new voting disks and delete the old ones. Please note, it is not allowed to delete the last remaining voting disk without adding a new one first. Perform the commands below on one node only:
- # crsctl delete css votedisk /dev/raw/raw4 -force
- # crsctl add css votedisk /dev/vote1 -force
- # crsctl delete css votedisk /dev/raw/raw5 -force
- # crsctl delete css votedisk /dev/raw/raw6 -force
- # crsctl add css votedisk /dev/vote2 -force
- # crsctl add css votedisk /dev/vote3 -force
- 4) Verify with crsctl query css votedisk:
- # crsctl query css votedisk
- 0. 0 /dev/vote1
- 1. 0 /dev/vote2
- 2. 0 /dev/vote3
- located 3 votedisk(s).
- 5) After this the Oracle Clusterware stack can be restarted with "crsctl start crs" as root. (Only required for 10g CRS)
- Monitoring the cluster alert log, $CRS_HOME/log/<hostname>/alert<hostname>.log, the newly configured voting disks should show as online:
- 2008-08-06 07:41:55.029
- [cssd(31750)]CRS-1605:CSSD voting file is online: /dev/vote1. Details in /crs/log/node1/cssd/ocssd.log.
- 2008-08-06 07:41:55.038
- [cssd(31750)]CRS-1605:CSSD voting file is online: /dev/vote2. Details in /crs/log/node1/cssd/ocssd.log.
- 2008-08-06 07:41:55.058
- [cssd(31750)]CRS-1605:CSSD voting file is online: /dev/vote3. Details in /crs/log/node1/cssd/ocssd.log.
- [cssd(31750)]CRS-1601:CSSD Reconfiguration complete. Active nodes are node1 node2 .
- References
- NOTE:1062983.1 - How to restore ASM based OCR after complete loss of the CRS diskgroup on Linux/Unix systems
- NOTE:390880.1 - OCR Corruption after Adding/Removing voting disk to a cluster when CRS stack is running
- NOTE:866102.1 - Renaming OCR Using "ocrconfig -overwrite" Fails