1# | Posted on 2014-1-16 16:42:57 | Views: 6500 | Replies: 8
Sorry about my last post; I asked the question the wrong way. Fortunately 相兵 left a comment pointing this out.
I reinstalled over the past few days and still hit the same problem. I'm posting it here now; please take a look and give me some pointers.
======================Environment===========================
Database software: Oracle 11g 11.2.3
ASM software: Oracle Grid Infrastructure 11.2.0.1
OS: CentOS 64-bit
Install environment: virtual machines
=======================================================
==================Hosts file on both nodes=======================
# eth0 - PUBLIC
192.168.0.3 rac1.example.com rac1
192.168.0.11 rac2.example.com rac2
# VIP
192.168.0.15 rac1-vip.example.com rac1-vip
192.168.0.16 rac2-vip.example.com rac2-vip
# eth1 - PRIVATE
172.0.2.100 rac1-pvt
172.0.2.101 rac2-pvt
# SCAN
192.168.0.215 rac-scan.example.com rac-scan
===========================================================
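A quick sanity check (a suggestion, not part of the original install log) is to confirm that these entries resolve identically on both nodes:
# Run on both rac1 and rac2; each name should return the same address on both nodes
for h in rac1 rac2 rac1-vip rac2-vip rac1-pvt rac2-pvt rac-scan; do getent hosts $h; done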
=========================Running root.sh================================
Everything was fine during installation until the very end, when root.sh failed on node RAC2.
root.sh on RAC1:
[root@rac1 ~]# /u01/app/grid/11.2.0/root.sh
...
(output omitted)
...
rac1 2014/01/16 15:56:58 /u01/app/grid/11.2.0/cdata/rac1/backup_20140116_155658.olr
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 9214 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
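At this point the stack on rac1 can be double-checked with, for example:
# On rac1: verify the HA services and the lower-stack (init) resources are up
/u01/app/grid/11.2.0/bin/crsctl check crs
/u01/app/grid/11.2.0/bin/crsctl stat res -t -init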
root.sh on RAC2 fails with the following errors:
[root@rac2 home]# /u01/app/grid/11.2.0/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/grid/11.2.0
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2014-01-16 16:04:43: Parsing the host name
2014-01-16 16:04:43: Checking for super user privileges
2014-01-16 16:04:43: User has super user privileges
Using configuration parameter file: /u01/app/grid/11.2.0/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on centos-release-5-4.el5.centos.1
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
DiskGroup DATA creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15017: diskgroup "DATA" cannot be mounted
ORA-15003: diskgroup "DATA" already mounted in another lock name space
Configuration of ASM failed, see logs for details
Did not succssfully configure and start ASM
CRS-2500: Cannot stop resource 'ora.crsd' as it is not running
CRS-4000: Command Stop failed, or completed with errors.
Command return code of 1 (256) from command: /u01/app/grid/11.2.0/bin/crsctl stop resource ora.crsd -init
Stop of resource "ora.crsd -init" failed
Failed to stop CRSD
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac2'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
Initial cluster configuration failed. See /u01/app/grid/11.2.0/cfgtoollogs/crsconfig/rootcrs_rac2.log for details
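Before any rerun of root.sh on rac2, the failed configuration normally has to be rolled back first; the standard 11.2 deconfig step (Grid home taken from the output above) would be:
# On rac2, as root: remove the partial Grid Infrastructure configuration
cd /u01/app/grid/11.2.0/crs/install
perl rootcrs.pl -deconfig -force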
--------------------------------------------------Checking the alert log------------------------------------------------------------------
[root@rac2 11.2.0]# tail -n200 log/rac2/alertrac2.log
Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
2014-01-16 16:04:53.126
[client(10833)]CRS-2106:The OLR location /u01/app/grid/11.2.0/cdata/rac2.olr is inaccessible. Details in /u01/app/grid/11.2.0/log/rac2/client/ocrconfig_10833.log.
2014-01-16 16:04:53.293
[client(10833)]CRS-2101:The OLR was formatted using version 3.
2014-01-16 16:05:06.710
[ohasd(10874)]CRS-2112:The OLR service started on node rac2.
2014-01-16 16:05:08.145
[ohasd(10874)]CRS-2772:Server 'rac2' has been assigned to pool 'Free'.
[client(11001)]CRS-10001:ADVM/ACFS is not supported on centos-release-5-4.el5.centos.1
2014-01-16 16:05:35.968
[ohasd(10874)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
2014-01-16 16:10:07.965
[cssd(11126)]CRS-1713:CSSD daemon is started in exclusive mode
2014-01-16 16:10:56.703
[cssd(11126)]CRS-1709:Lease acquisition failed for node rac2 because no voting file has been configured; Details at (:CSSNM00031:) in /u01/app/grid/11.2.0/log/rac2/cssd/ocssd.log
2014-01-16 16:13:12.591
[cssd(11126)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac2 .
2014-01-16 16:13:26.127
[ctssd(11184)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac2.
2014-01-16 16:14:16.663
[ctssd(11184)]CRS-2401:The Cluster Time Synchronization Service started on host rac2.
2014-01-16 16:19:55.557
[ctssd(11184)]CRS-2405:The Cluster Time Synchronization Service on host rac2 is shutdown by user
2014-01-16 16:20:04.379
[cssd(11126)]CRS-1603:CSSD on node rac2 shutdown by user.
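The CRS-1709 entry points to ocssd.log for the details (:CSSNM00031:); the relevant lines can be pulled out with something like:
# On rac2: inspect the CSSD log referenced by the alert log above
tail -n 100 /u01/app/grid/11.2.0/log/rac2/cssd/ocssd.log
grep CSSNM00031 /u01/app/grid/11.2.0/log/rac2/cssd/ocssd.log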
============================End of errors=================================
=========================Checking the NICs================================
---RAC1
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:48:7a:ca brd ff:ff:ff:ff:ff:ff
inet 192.168.0.3/24 brd 192.168.0.255 scope global eth0
inet 192.168.0.15/24 brd 192.168.0.255 scope global secondary eth0:1
inet 192.168.0.215/24 brd 192.168.0.255 scope global secondary eth0:2
inet6 fe80::20c:29ff:fe48:7aca/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:48:7a:d4 brd ff:ff:ff:ff:ff:ff
inet 172.0.2.100/24 brd 172.0.2.255 scope global eth1
inet6 fe80::20c:29ff:fe48:7ad4/64 scope link
valid_lft forever preferred_lft forever
4: sit0: <NOARP> mtu 1480 qdisc noop
link/sit 0.0.0.0 brd 0.0.0.0
-- On node RAC1, all the floating IPs (VIP and SCAN) are up
---RAC2
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:07:17:9e brd ff:ff:ff:ff:ff:ff
inet 192.168.0.11/24 brd 192.168.0.255 scope global eth0
inet6 fe80::20c:29ff:fe07:179e/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:07:17:a8 brd ff:ff:ff:ff:ff:ff
inet 172.0.2.101/24 brd 172.0.2.255 scope global eth1
inet6 fe80::20c:29ff:fe07:17a8/64 scope link
valid_lft forever preferred_lft forever
4: sit0: <NOARP> mtu 1480 qdisc noop
link/sit 0.0.0.0 brd 0.0.0.0
-- Node RAC2 only has the base IP on the first NIC (no VIP or SCAN)
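The VIP and SCAN addresses are plumbed by the clusterware itself, so they can only appear on rac2 once root.sh completes there. With the stack up, they can be listed with, e.g.:
# List the VIP/SCAN resources once Grid Infrastructure is running
/u01/app/grid/11.2.0/bin/crsctl stat res -t | grep -Ei 'vip|scan'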
I suspect the cause is that root.sh did not complete successfully on RAC2.
This is my first time installing Oracle 11g RAC. I followed the official documentation step by step and have reinstalled several times, but it still fails, and after a long search online I found no concrete solution. Could someone experienced give me a clue or a hint? Thanks.
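For what it's worth, ORA-15003 ("already mounted in another lock name space") together with CSSD starting in exclusive mode on rac2 often indicates that the two VMs are not really seeing one shared disk, so rac2 cannot find the voting files and diskgroup that rac1 created. A hedged check, with /dev/sdb standing in for the actual ASM disk device (a hypothetical name, not taken from this post):
# Compare the SCSI identity of the ASM disk on rac1 and rac2
# (RHEL/CentOS 5 scsi_id syntax); the IDs must match across nodes
/sbin/scsi_id -g -u -s /block/sdb
# Or compare the ASM disk header as seen from each node
/u01/app/grid/11.2.0/bin/kfed read /dev/sdb | head -20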
====================
Finally, 相兵, could you please delete my earlier post that I got wrong:
http://t.askmaclean.com/thread-3827-1-1.html
====================