Oracle Database Data Recovery and Performance Optimization Forum

1#
Posted on 2014-1-16 16:42:57 | Views: 6499 | Replies: 8
Sorry about my last post; the way I asked the question was not quite right. Fortunately Xiangbing left a comment pointing it out.

I reinstalled over the past few days and still hit the same problem. I'm posting the details here; please take a look and give me some pointers.

====================== Environment ===========================
Database software         Oracle 11g 11.2.3
ASM software              Oracle Grid Infrastructure 11.2.0.1
Operating system          CentOS 5.4, 64-bit
Installation environment  Virtual machines
=======================================================
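Side note: before re-running root.sh it may be worth repeating the prerequisite check with the Cluster Verification Utility shipped in the Grid media. A minimal sketch (the unpack path /home/grid/grid is only an example):

# Run cluvfy from the directory where the 11.2.0.1 Grid installer was unpacked
cd /home/grid/grid
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose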

================== /etc/hosts configuration on both nodes =======================
# eth0 - PUBLIC
192.168.0.3     rac1.example.com rac1
192.168.0.11    rac2.example.com rac2

# VIP
192.168.0.15    rac1-vip.example.com rac1-vip
192.168.0.16    rac2-vip.example.com rac2-vip

# eth1 - PRIVATE
172.0.2.100     rac1-pvt
172.0.2.101     rac2-pvt

#scan ip
192.168.0.215   rac-scan.example.com rac-scan
===========================================================
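A quick sanity check that both nodes can resolve and reach each other with this hosts file (run from rac1; the same commands with the rac1 names apply on rac2):

# Public network (eth0) and private interconnect (eth1)
ping -c 2 rac2.example.com
ping -c 2 rac2-pvt
# The SCAN name must resolve on both nodes
getent hosts rac-scan.example.com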

========================= Running root.sh ================================
The installation went fine all the way through; the error came at the very end, when running root.sh on node RAC2.
root.sh on RAC1 ran as follows:
[root@rac1 ~]# /u01/app/grid/11.2.0/root.sh
.....
(output omitted)
.....
rac1     2014/01/16 15:56:58     /u01/app/grid/11.2.0/cdata/rac1/backup_20140116_155658.olr
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 9214 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.


Running it on RAC2 failed; the errors are as follows:
[root@rac2 home]# /u01/app/grid/11.2.0/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/grid/11.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2014-01-16 16:04:43: Parsing the host name
2014-01-16 16:04:43: Checking for super user privileges
2014-01-16 16:04:43: User has super user privileges
Using configuration parameter file: /u01/app/grid/11.2.0/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on centos-release-5-4.el5.centos.1



CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded

DiskGroup DATA creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15017: diskgroup "DATA" cannot be mounted
ORA-15003: diskgroup "DATA" already mounted in another lock name space


Configuration of ASM failed, see logs for details
Did not succssfully configure and start ASM
CRS-2500: Cannot stop resource 'ora.crsd' as it is not running
CRS-4000: Command Stop failed, or completed with errors.
Command return code of 1 (256) from command: /u01/app/grid/11.2.0/bin/crsctl stop resource ora.crsd -init
Stop of resource "ora.crsd -init" failed
Failed to stop CRSD

CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac2'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
Initial cluster configuration failed.  See /u01/app/grid/11.2.0/cfgtoollogs/crsconfig/rootcrs_rac2.log for details
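From the messages, ORA-15003 suggests that root.sh on rac2 is trying to create the DATA diskgroup that the first root.sh on rac1 has already created and mounted, instead of joining the existing cluster. Two rough checks, as a sketch only (the +ASM1 SID and the /dev/asm-disk1 device name are placeholders for this environment):

# On rac1, as the grid user: confirm DATA is already mounted there
export ORACLE_SID=+ASM1
sqlplus / as sysasm <<'EOF'
select name, state from v$asm_diskgroup;
EOF

# On rac2: read the header of one shared disk with kfed;
# hdrsts=MEMBER and grpname=DATA mean rac2 sees the disks rac1 already used
/u01/app/grid/11.2.0/bin/kfed read /dev/asm-disk1 | egrep 'grpname|hdrsts'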

-------------------------------------------------- Checking the alert log --------------------------------------------------
[root@rac2 11.2.0]# tail -n200 log/rac2/alertrac2.log  
Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
2014-01-16 16:04:53.126
[client(10833)]CRS-2106:The OLR location /u01/app/grid/11.2.0/cdata/rac2.olr is inaccessible. Details in /u01/app/grid/11.2.0/log/rac2/client/ocrconfig_10833.log.
2014-01-16 16:04:53.293
[client(10833)]CRS-2101:The OLR was formatted using version 3.
2014-01-16 16:05:06.710
[ohasd(10874)]CRS-2112:The OLR service started on node rac2.
2014-01-16 16:05:08.145
[ohasd(10874)]CRS-2772:Server 'rac2' has been assigned to pool 'Free'.
[client(11001)]CRS-10001:ADVM/ACFS is not supported on centos-release-5-4.el5.centos.1

2014-01-16 16:05:35.968
[ohasd(10874)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
2014-01-16 16:10:07.965
[cssd(11126)]CRS-1713:CSSD daemon is started in exclusive mode
2014-01-16 16:10:56.703
[cssd(11126)]CRS-1709:Lease acquisition failed for node rac2 because no voting file has been configured; Details at (:CSSNM00031:) in /u01/app/grid/11.2.0/log/rac2/cssd/ocssd.log
2014-01-16 16:13:12.591
[cssd(11126)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac2 .
2014-01-16 16:13:26.127
[ctssd(11184)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac2.
2014-01-16 16:14:16.663
[ctssd(11184)]CRS-2401:The Cluster Time Synchronization Service started on host rac2.
2014-01-16 16:19:55.557
[ctssd(11184)]CRS-2405:The Cluster Time Synchronization Service on host rac2 is shutdown by user
2014-01-16 16:20:04.379
[cssd(11126)]CRS-1603:CSSD on node rac2 shutdown by user.

============================ That is all of the error output =================================
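If root.sh has to be retried on rac2, my understanding is that the partially configured stack should be rolled back first rather than running root.sh again on top of it; on 11.2 that would be roughly (as root on rac2, Grid home as in this install):

# Roll back the failed clusterware configuration on rac2
perl /u01/app/grid/11.2.0/crs/install/rootcrs.pl -deconfig -force
# After the shared-storage issue is fixed, run root.sh again
/u01/app/grid/11.2.0/root.sh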

========================= Checking the network interfaces ================================
---RAC1
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:48:7a:ca brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.3/24 brd 192.168.0.255 scope global eth0
    inet 192.168.0.15/24 brd 192.168.0.255 scope global secondary eth0:1
    inet 192.168.0.215/24 brd 192.168.0.255 scope global secondary eth0:2
    inet6 fe80::20c:29ff:fe48:7aca/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:48:7a:d4 brd ff:ff:ff:ff:ff:ff
    inet 172.0.2.100/24 brd 172.0.2.255 scope global eth1
    inet6 fe80::20c:29ff:fe48:7ad4/64 scope link
       valid_lft forever preferred_lft forever
4: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0

-- On node RAC1 all the floating IPs (VIP and SCAN) are up

---RAC2
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:07:17:9e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.11/24 brd 192.168.0.255 scope global eth0
    inet6 fe80::20c:29ff:fe07:179e/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:07:17:a8 brd ff:ff:ff:ff:ff:ff
    inet 172.0.2.101/24 brd 172.0.2.255 scope global eth1
    inet6 fe80::20c:29ff:fe07:17a8/64 scope link
       valid_lft forever preferred_lft forever
4: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0

-- Node RAC2 only has the static addresses on its NICs (no VIP or SCAN IP)
I suspect the cause is that the root.sh script did not complete successfully on RAC2.
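That would be consistent with root.sh never completing on rac2: the VIP and SCAN addresses are brought up by the clusterware, so a node with no running stack only shows its static IPs. The current resource placement can be checked from rac1, for example:

# On rac1: list clusterware resources and the node each one runs on
/u01/app/grid/11.2.0/bin/crsctl stat res -t
# Stack health across all nodes
/u01/app/grid/11.2.0/bin/crsctl check cluster -all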

This is my first time installing Oracle 11g RAC. I followed the official documentation step by step and installed several times, but it still fails with the same errors. I searched the web for a long time without finding a concrete solution. Could an expert please give me some clues or hints? Thank you.

====================
Finally, Xiangbing, please help me delete the earlier post that I got wrong:
http://t.askmaclean.com/thread-3827-1-1.html
====================
2#
Posted on 2014-1-16 17:07:00
Please explain: why do you have to install 11.2.0.1? Why not install 11.2.0.3 or 11.2.0.4 directly?


3#
Posted on 2014-1-20 08:45:50
I have been busy in the server room these past few days and did not get around to replying.
The thing is, I happened to have an 11.2.0.1 Grid installation package on hand at the time, so I just installed with 11.2.0.1.
I am downloading 11.2.0.3 today and plan to reinstall with that version.

Liu, may I ask: is the 11.2.0.1 release known to run into problems during installation?


4#
Posted on 2014-2-5 17:10:46
yzm1987 wrote on 2014-1-20 08:45:
I have been busy in the server room these past few days and did not get around to replying.
The thing is, I happened to have an 11.2.0.1 Grid installation package on hand, so I just installed with 11. ...

From 11.2 onward, unless you are specifically testing a bug in 11.2.0.1, there is no reason to install that base release anymore.


5#
Posted on 2014-2-8 14:56:58
ALLSTARS_ORACLE wrote on 2014-2-5 17:10:
From 11.2 onward, unless you are specifically testing a bug in 11.2.0.1, there is no reason to install that base release anymore.

Liu, personally I do not think the problem should be blamed on the version the OP installed. Whether it is the .1 base release or .2, .3, or .4, they are all official releases from Oracle, are they not? Even with plenty of bugs, an official release should not fail at the installation stage; surely Oracle runs a great deal of testing before shipping, at the very least enough to make sure the software can be installed?


6#
Posted on 2014-2-8 16:01:50
goodwzb wrote on 2014-2-8 14:56:
Liu, personally I do not think the problem should be blamed on the version the OP installed. Whether it is the .1 base release or .2, .3, or .4 ...

To be honest, when Oracle's own ACS engineers install RAC, the standard is three to five days per cluster.

But because the OS and the other prerequisites are not prepared by the engineer who goes on site, it is not unusual for an installation to drag on for a week without succeeding, and in those cases the customer usually files a complaint.

Base releases such as 10.2.0.1 and 11.2.0.1 in particular fail to install all the time; in many environments they simply cannot be installed, because their fault tolerance is poor and they carry too many installation bugs.

Before 11gR2 you had no choice but to install the base release first, so the pain was unavoidable. From 11gR2 onward, why not follow installation best practice instead of wasting time and energy torturing yourself?

As for the testing you mention, honestly, the scope and duration of that testing are quite limited.


7#
Posted on 2014-2-8 16:10:04
Can you start the ASM instance manually on rac2?
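For reference, a manual attempt on rac2 would look roughly like the sketch below (the +ASM2 SID is an assumption; note that ASM registers with CSS, so this only gets anywhere once the lower clusterware stack on rac2 is at least partly up):

# On rac2, as the grid user
export ORACLE_HOME=/u01/app/grid/11.2.0
export ORACLE_SID=+ASM2
$ORACLE_HOME/bin/sqlplus / as sysasm <<'EOF'
startup nomount
alter diskgroup DATA mount;
EOF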


8#
Posted on 2014-2-8 16:59:03
Maclean Liu (刘相兵) wrote on 2014-2-8 16:01:
To be honest, when Oracle's own ACS engineers install RAC, the standard is three to five days per cluster.

But because the OS and the other prerequisites are not prepared by the engineer who goes ...

Liu, you are absolutely right. No wonder that, whatever the software, my boss reacts to a .1 base release as if he had seen a ghost. Clearly you are all speaking from experience and have seen far too many disasters of this kind...


9#
Posted on 2014-2-9 11:19:59
I installed on this very version and ran into a lot of problems in my own testing as well, so I downloaded a later release.

http://pan.baidu.com/s/1jGqO9pk

Download it quickly.

