Oracle Exadata Assessment Report
|
Cluster Name | dm01-cluster |
OS Version | Linux x86-64 (OEL/RHEL 5) 2.6.32-400.1.1.el5uek |
CRS Home - Version | /u01/app/11.2.0.3/grid - 11.2.0.3.0 |
DB Home - Version - Names | /u01/app/oracle/product/11.2.0.3/dbhome_1 - 11.2.0.3.0 - MCSDB |
Exadata Version | 11.2.3.2.0 |
Number of nodes | 7 |
Database Servers | 2 |
Storage Servers | 3 |
IB Switches | 2 |
exachk Version | 2.1.5_20120524 |
Collection | exachk_MCSDB_041614_105402.zip |
Collection Date | 16-Apr-2014 11:03:16 |
Removing findings on this page does not change the original HTML file. Use the browser's Save Page function (or press Ctrl+S) to save the report.
FAIL, WARNING, ERROR, and INFO findings should all be evaluated. INFO status is considered a significant finding, and its details should be reviewed in light of your environment.
Status | Type | Message | Status On | Details |
---|---|---|---|---|
FAIL | OS Check | Database control files are not configured as recommended | All Database Servers | View |
FAIL | Patch Check | System may be exposed to Exadata Critical Issue DB11 | All Homes | View |
FAIL | OS Check | Database Server Physical Drive Configuration does not meet recommendation | All Database Servers | View |
FAIL | SQL Parameter Check | Database parameter USE_LARGE_PAGES is NOT set to recommended value | All Instances | View |
FAIL | SQL Parameter Check | Database parameter GLOBAL_NAMES is NOT set to recommended value | All Instances | View |
FAIL | OS Check | InfiniBand network error counters are non-zero | All Database Servers | View |
FAIL | SQL Check | Some data or temp files are not autoextensible | All Databases | View |
FAIL | SQL Parameter Check | Database parameter _lm_rcvr_hang_allow_time is NOT set to the recommended value | All Instances | View |
FAIL | SQL Parameter Check | Database parameter _kill_diagnostics_timeout is not set to recommended value | All Instances | View |
WARNING | OS Check | Not all voting disks are online | All Database Servers | View |
WARNING | SQL Check | Some tablespaces are not using Automatic Segment Space Management | All Databases | View |
INFO | OS Check | ASM griddisk, diskgroup, and failure group mapping not checked | All Database Servers | View |
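Several of the database-server findings above concern init.ora parameters and file autoextension. A minimal sketch of the dictionary queries that would confirm those findings before any change is applied (run as a DBA user on each instance; the parameter names are taken from the findings, but the exact recommended values must come from the exachk detail views):

```sql
-- Confirm the flagged parameters (USE_LARGE_PAGES, GLOBAL_NAMES findings)
SELECT name, value, isdefault
FROM   v$parameter
WHERE  name IN ('use_large_pages', 'global_names');

-- List data/temp files behind the "Some data or temp files are not
-- autoextensible" finding
SELECT file_name, tablespace_name FROM dba_data_files
WHERE  autoextensible = 'NO'
UNION ALL
SELECT file_name, tablespace_name FROM dba_temp_files
WHERE  autoextensible = 'NO';
```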
Status | Type | Message | Status On | Details |
---|---|---|---|---|
FAIL | Storage Server Check | The griddisk ASM status should match specification | dm01cel01 | View |
FAIL | Storage Server Check | The celldisk configuration on disk drives should match Oracle best practices | dm01cel01 | View |
FAIL | Storage Server Check | One or more storage servers have open critical alerts | All Storage Servers | View |
FAIL | Storage Server Check | Storage Server alerts are not configured to be sent via email | All Storage Servers | View |
WARNING | Storage Server Check | Free space in root(/) filesystem is less than recommended on one or more storage servers. | All Storage Servers | View |
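The storage-server findings above can be reproduced from the cells themselves with CellCLI. A sketch, assuming access as celladmin or root on each cell (dm01cel01 through dm01cel03); attribute names follow the CellCLI object model, and the exact filter syntax should be checked against the CellCLI reference for this Exadata release:

```shell
# Griddisk status vs. ASM (the "griddisk ASM status should match
# specification" finding)
cellcli -e "list griddisk attributes name,status,asmmodestatus"

# Open critical alerts (the "open critical alerts" finding)
cellcli -e "list alerthistory where severity = 'critical' and examinedBy = ''"

# E-mail notification settings (the "alerts are not configured to be sent
# via email" finding)
cellcli -e "list cell attributes smtpServer,smtpToAddr,notificationMethod"
```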
Outage Type | Status | Type | Message | Status On | Details |
---|---|---|---|---|---|
COMPUTER FAILURE PREVENTION BEST PRACTICES | PASS | Description | |||
PASS | SQL Parameter Check | fast_start_mttr_target has been changed from default | All Instances | View | |
STORAGE FAILURES PREVENTION BEST PRACTICES | PASS | Description | |||
PASS | SQL Check | At least one high redundancy diskgroup configured | All Databases | View | |
DATA CORRUPTION PREVENTION BEST PRACTICES | FAIL | Description | |||
FAIL | SQL Parameter Check | Database parameter DB_BLOCK_CHECKSUM is NOT set to recommended value | All Instances | View | |
WARNING | OS Check | Database parameter DB_BLOCK_CHECKING is NOT set to the recommended value. | All Database Servers | View |
PASS | SQL Parameter Check | Database parameter DB_LOST_WRITE_PROTECT is set to recommended value | All Instances | View | |
PASS | OS Check | Shell limit soft nofile for DB is configured according to recommendation | All Database Servers | View | |
LOGICAL CORRUPTION PREVENTION BEST PRACTICES | FAIL | Description | |||
FAIL | SQL Check | Flashback is not configured | All Databases | View | |
PASS | SQL Parameter Check | Database parameter UNDO_RETENTION is not null | All Instances | View | |
DATABASE/CLUSTER/SITE FAILURE PREVENTION BEST PRACTICES | FAIL | Description | |||
FAIL | OS Check | Oracle Net service name to ship redo to the standby is not configured properly | All Database Servers | View | |
FAIL | SQL Check | Remote destination is not using either ASYNC or SYNC transport for redo transport | All Databases | View |
FAIL | SQL Check | Standby is not running in MANAGED REAL TIME APPLY mode | All Databases | View | |
FAIL | SQL Check | Standby redo logs are not configured on both sites | All Databases | View | |
FAIL | SQL Check | Physical standby status is not valid | All Databases | View | |
WARNING | SQL Check | Logical standby unsupported datatypes found | All Databases | View | |
PASS | SQL Check | Database parameter LOG_FILE_NAME_CONVERT or DB_CREATE_ONLINE_LOG_DEST_1 is not null | All Databases | View | |
NETWORK FAILURE PREVENTION BEST PRACTICES | INFO | Description | |||
CLIENT FAILOVER OPERATIONAL BEST PRACTICES | FAIL | Description | |||
FAIL | OS Check | Data Guard broker configuration does not exist | All Database Servers | View |
PASS | OS Check | Clusterware is running | All Database Servers | View | |
OPERATIONAL BEST PRACTICES | INFO | Description | |||
CONSOLIDATION DATABASE PRACTICES | INFO | Description |
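The Data Guard findings in the scorecard above (flashback, redo transport mode, standby redo logs) map to a few primary-side dictionary views. A sketch of the checks, assuming a DBA session on the primary; dest_id 2 as the remote destination is an assumption, not confirmed by this report:

```sql
-- "Flashback is not configured"
SELECT flashback_on FROM v$database;

-- "Remote destination is not using either ASYNC or SYNC transport":
-- inspect the transmit mode of the (assumed) remote destination
SELECT dest_id, status, transmit_mode
FROM   v$archive_dest
WHERE  dest_id = 2;

-- "Standby redo logs are not configured on both sites":
-- standby redo logs as seen on the primary
SELECT group#, thread#, bytes FROM v$standby_log;
```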
Status | Type | Message | Status On | Details |
---|---|---|---|---|
PASS | ASM Check | ASM processes parameter is set to recommended value | All ASM Instances | View |
PASS | SQL Parameter Check | RECYCLEBIN is set to the recommended value | All Instances | View |
PASS | SQL Parameter Check | ASM parameter ASM_POWER_LIMIT is set to the default value. | All Instances | View |
PASS | OS Check | DNS Server ping time is in acceptable range | All Database Servers | View |
PASS | OS Check | Database Server Disk Controller Configuration meets recommendation | All Database Servers | View |
PASS | SQL Parameter Check | ASM parameter MEMORY_MAX_TARGET is set according to recommended value | All Instances | View |
PASS | SQL Parameter Check | ASM parameter PGA_AGGREGATE_TARGET is set according to recommended value | All Instances | View |
PASS | SQL Parameter Check | ASM parameter MEMORY_TARGET is set according to recommended value | All Instances | View |
PASS | SQL Parameter Check | ASM parameter SGA_TARGET is set according to recommended value. | All Instances | View |
PASS | SQL Check | All bigfile tablespaces have non-default maxbytes values set | All Databases | View |
PASS | OS Check | Subnet manager is running on an InfiniBand switch | All Database Servers | View |
PASS | OS Check | Address Resolution Protocol (ARP) is configured properly on database server. | All Database Servers | View |
PASS | OS Check | Only one non-ASM instance discovered | All Database Servers | View |
PASS | OS Check | Database parameter Db_create_online_log_dest_n is set to recommended value | All Database Servers | View |
PASS | OS Check | Database parameters log_archive_dest_n with Location attribute are all set to recommended value | All Database Servers | View |
PASS | OS Check | Database parameter COMPATIBLE is set to recommended value | All Database Servers | View |
PASS | OS Check | All Ethernet network cables are connected | All Database Servers | View |
PASS | OS Check | All InfiniBand network cables are connected | All Database Servers | View |
PASS | OS Check | Database parameter db_recovery_file_dest_size is set to recommended value | All Database Servers | View |
PASS | OS Check | Database DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST are in different diskgroups | All Database Servers | View |
PASS | OS Check | Database parameter CLUSTER_INTERCONNECTS is set to the recommended value | All Database Servers | View |
PASS | ASM Check | ASM parameter CLUSTER_INTERCONNECTS is set to the recommended value | All ASM Instances | View |
PASS | SQL Parameter Check | Database parameter PARALLEL_EXECUTION_MESSAGE_SIZE is set to recommended value | All Instances | View |
PASS | SQL Parameter Check | Database parameter SQL92_SECURITY is set to recommended value | All Instances | View |
PASS | SQL Parameter Check | Database parameter OPEN_CURSORS is set to recommended value | All Instances | View |
PASS | SQL Parameter Check | Database parameter OS_AUTHENT_PREFIX is set to recommended value | All Instances | View |
PASS | SQL Parameter Check | Database parameter PARALLEL_THREADS_PER_CPU is set to recommended value | All Instances | View |
PASS | SQL Parameter Check | Database parameter _ENABLE_NUMA_SUPPORT is set to recommended value | All Instances | View |
PASS | SQL Parameter Check | Database parameter PARALLEL_ADAPTIVE_MULTI_USER is set to recommended value | All Instances | View |
PASS | SQL Parameter Check | Database parameter _file_size_increase_increment is set to the recommended value | All Instances | View |
PASS | SQL Parameter Check | Database parameter LOG_BUFFER is set to recommended value | All Instances | View |
PASS | OS Check | Disk cache policy is set to Disabled on database server | All Database Servers | View |
PASS | OS Check | Exadata software version supports Automatic Service Request functionality | All Database Servers | View |
PASS | OS Check | Oracle ASM Communication is using RDS protocol on InfiniBand Network | All Database Servers | View |
PASS | OS Check | Database server disk controllers use writeback cache | All Database Servers | View |
PASS | OS Check | Verify-topology executes without any errors or warnings | All Database Servers | View |
PASS | OS Check | Hardware and firmware profile check is successful. [Database Server] | All Database Servers | View |
PASS | OS Check | Local listener init parameter is set to local node VIP | All Database Servers | View |
PASS | OS Check | ohasd/orarootagent_root Log Ownership is Correct (root root) | All Database Servers | View |
PASS | OS Check | Remote listener is set to SCAN name | All Database Servers | View |
PASS | OS Check | NTP is running with correct setting | All Database Servers | View |
PASS | OS Check | Interconnect is configured on non-routable network addresses | All Database Servers | View |
PASS | SQL Check | SYS.IDGEN1$ sequence cache size >= 1,000 | All Databases | View |
PASS | OS Check | crsd Log Ownership is Correct (root root) | All Database Servers | View |
PASS | OS Check | OSWatcher is running | All Database Servers | View |
PASS | OS Check | ORA_CRS_HOME environment variable is not set | All Database Servers | View |
PASS | SQL Check | GC blocks lost is not occurring | All Databases | View |
PASS | OS Check | NIC bonding mode is not set to Broadcast(3) for cluster interconnect | All Database Servers | View |
PASS | SQL Check | SYS.AUDSES$ sequence cache size >= 10,000 | All Databases | View |
PASS | OS Check | SELinux is not being Enforced. | All Database Servers | View |
PASS | OS Check | crsd/orarootagent_root Log Ownership is Correct (root root) | All Database Servers | View |
PASS | OS Check | $ORACLE_HOME/bin/oradism ownership is root | All Database Servers | View |
PASS | OS Check | NIC bonding is configured for interconnect | All Database Servers | View |
PASS | OS Check | $ORACLE_HOME/bin/oradism setuid bit is set | All Database Servers | View |
PASS | OS Check | ohasd Log Ownership is Correct (root root) | All Database Servers | View |
PASS | OS Check | NIC bonding is configured for public network (VIP) | All Database Servers | View |
PASS | OS Check | Open files limit (ulimit -n) for current user is set to recommended value >= 65536 or unlimited | All Database Servers | View |
PASS | OS Check | NIC bonding mode is not set to Broadcast(3) for public network | All Database Servers | View |
PASS | OS Check | Shell limit hard stack for GI is configured according to recommendation | All Database Servers | View |
PASS | OS Check | Shell limit soft nproc for DB is configured according to recommendation | All Database Servers | View |
PASS | OS Check | Shell limit hard stack for DB is configured according to recommendation | All Database Servers | View |
PASS | OS Check | Shell limit hard nofile for DB is configured according to recommendation | All Database Servers | View |
PASS | OS Check | Shell limit hard nproc for DB is configured according to recommendation | All Database Servers | View |
PASS | OS Check | Shell limit hard nproc for GI is configured according to recommendation | All Database Servers | View |
PASS | OS Check | Shell limit hard nofile for GI is configured according to recommendation | All Database Servers | View |
PASS | OS Check | Shell limit soft nproc for GI is configured according to recommendation | All Database Servers | View |
PASS | OS Check | Shell limit soft nofile for GI is configured according to recommendation | All Database Servers | View |
PASS | OS Check | Management network is separate from data network | All Database Servers | View |
PASS | OS Check | RAID controller battery temperature is normal [Database Server] | All Database Servers | View |
PASS | ASM Check | ASM Audit file destination file count <= 100,000 | All ASM Instances | View |
PASS | OS Check | Database Server Virtual Drive Configuration meets recommendation | All Database Servers | View |
PASS | ASM Check | Correct number of FailGroups per ASM DiskGroup are configured | All ASM Instances | View |
PASS | OS Check | System model number is correct | All Database Servers | View |
PASS | OS Check | Number of Mounts before a File System check is set to -1 for system disk | All Database Servers | View |
PASS | OS Check | Free space in root(/) filesystem meets or exceeds recommendation. | All Database Servers | View |
PASS | OS Check | Oracle RAC Communication is using RDS protocol on InfiniBand Network | All Database Servers | View |
PASS | OS Check | Database Home is properly linked with RDS library | All Database Servers | View |
PASS | OS Check | InfiniBand is the Private Network for Oracle Clusterware Communication | All Database Servers | View |
PASS | SQL Check | RDBMS Version is 11.2.0.2 or higher as expected | All Databases | View |
PASS | SQL Check | All tablespaces are locally managed tablespaces | All Databases | View |
PASS | ASM Check | ASM Version is 11.2.0.2 or higher as expected | All ASM Instances | View |
PASS | OS Check | NUMA is OFF at operating system level. | All Database Servers | View |
PASS | OS Check | Database Server InfiniBand network MTU size is 65520 | All Database Servers | View |
PASS | OS Check | Clusterware Home is properly linked with RDS library | All Database Servers | View |
PASS | OS Check | CSS misscount is set to the recommended value of 60 | All Database Servers | View |
PASS | OS Check | Database server InfiniBand network is in "connected" mode. | All Database Servers | View |
PASS | ASM Check | All disk groups have compatible.asm parameter set to recommended values | All ASM Instances | View |
PASS | ASM Check | All disk groups have CELL.SMART_SCAN_CAPABLE parameter set to true | All ASM Instances | View |
PASS | ASM Check | All disk groups have compatible.rdbms parameter set to recommended values | All ASM Instances | View |
PASS | ASM Check | All disk groups have allocation unit size set to 4MB | All ASM Instances | View |
Status | Type | Message | Status On | Details |
---|---|---|---|---|
PASS | Storage Server Check | The celldisk configuration on flash memory devices matches Oracle best practices | All Storage Servers | View |
PASS | Storage Server Check | The griddisk count matches across all storage servers where a given prefix name exists | All Storage Servers | View |
PASS | Storage Server Check | The total number of griddisks with a given prefix name is evenly divisible by the number of celldisks | All Storage Servers | View |
PASS | Storage Server Check | The total size of all griddisks fully utilizes celldisk capacity | All Storage Servers | View |
PASS | Storage Server Check | DNS Server ping time is in acceptable range | All Storage Servers | View |
PASS | Storage Server Check | Smart flash log is created on all storage servers | All Storage Servers | View |
PASS | Storage Server Check | Storage Server Flash Memory is configured as Exadata Smart Flash Cache | All Storage Servers | View |
PASS | Storage Server Check | Peripheral component interconnect (PCI) bridge is configured for generation II on all storage servers | All Storage Servers | View |
PASS | Storage Server Check | There are no griddisks configured on flash memory devices | All Storage Servers | View |
PASS | Storage Server Check | No Storage Server conventional or flash disks have a performance problem | All Storage Servers | View |
PASS | Storage Server Check | All InfiniBand network cables are connected on all Storage Servers | All Storage Servers | View |
PASS | Storage Server Check | All Ethernet network cables are connected on all Storage Servers | All Storage Servers | View |
PASS | Storage Server Check | Disk cache policy is set to Disabled on all storage servers | All Storage Servers | View |
PASS | Storage Server Check | Electronic Storage Module (ESM) Lifetime is within specification for all flash cards on all storage servers | All Storage Servers | View |
PASS | Storage Server Check | Management network is separate from data network on all storage servers | All Storage Servers | View |
PASS | Storage Server Check | Ambient temperature is within the recommended range. | All Storage Servers | View |
PASS | Storage Server Check | Software profile check is successful on all storage servers. | All Storage Servers | View |
PASS | Storage Server Check | Hardware and firmware profile check is successful on all storage servers. | All Storage Servers | View |
PASS | Storage Server Check | OSWatcher is running on all storage servers | All Storage Servers | View |
PASS | Storage Server Check | RAID controller battery temperature is normal [Storage Server] | All Storage Servers | View |
PASS | Storage Server Check | All Exadata storage servers meet the system model number requirement | All Storage Servers | View |
PASS | Storage Server Check | All storage server disk controllers use writeback cache | All Storage Servers | View |
PASS | Storage Server Check | No celldisks have status of predictive failure | All Storage Servers | View |
PASS | Storage Server Check | RAID controller version matches on all storage servers | All Storage Servers | View |
PASS | Storage Server Check | No Storage Server Memory (ECC) Errors found. | All Storage Servers | View |
Status | Type | Message | Status On | Details |
---|---|---|---|---|
PASS | Cluster Wide Check | RDBMS home /u01/app/oracle/product/11.2.0.3/dbhome_1 has same number of patches installed across the cluster | Cluster Wide | - |
PASS | Cluster Wide Check | Clusterware active version matches across cluster. | Cluster Wide | View |
PASS | Cluster Wide Check | Grid Infrastructure software owner UID matches across cluster | Cluster Wide | View |
PASS | Cluster Wide Check | Timezone matches for current user across cluster. | Cluster Wide | View |
PASS | Cluster Wide Check | Private interconnect interface names are the same across cluster | Cluster Wide | View |
PASS | Cluster Wide Check | RDBMS software version matches across cluster. | Cluster Wide | View |
PASS | Cluster Wide Check | Public network interface names are the same across cluster | Cluster Wide | View |
PASS | Cluster Wide Check | OS Kernel version(uname -r) matches across cluster. | Cluster Wide | View |
PASS | Cluster Wide Check | RDBMS software owner UID matches across cluster | Cluster Wide | View |
PASS | Cluster Wide Check | Time zone matches for Grid Infrastructure software owner across cluster | Cluster Wide | View |
PASS | Cluster Wide Check | Time zone matches for root user across cluster | Cluster Wide | View |
PASS | Cluster Wide Check | Master (Rack) Serial Number matches across database servers and storage servers | Cluster Wide | View |
Best Practices and Other Recommendations are generally items documented in various sources that could be overlooked. exachk assesses them and calls attention to any findings.
Status on Cluster Wide: PASS => Clusterware active version matches across cluster. |
dm01db01 = 112030 dm01db02 = 112030 |
Status on Cluster Wide: PASS => Grid Infrastructure software owner UID matches across cluster |
dm01db01 = 1000 dm01db02 = 1000 |
Status on Cluster Wide: PASS => Timezone matches for current user across cluster. |
dm01db01 = CST dm01db02 = CST |
Status on Cluster Wide: PASS => Private interconnect interface names are the same across cluster |
dm01db01 = bondib0 dm01db02 = bondib0 |
Status on Cluster Wide: PASS => RDBMS software version matches across cluster. |
dm01db01 = 112030 dm01db02 = 112030 |
Status on Cluster Wide: PASS => Public network interface names are the same across cluster |
dm01db01 = bondeth0 dm01db02 = bondeth0 |
Status on Cluster Wide: PASS => OS Kernel version(uname -r) matches across cluster. |
dm01db01 = 2632-40011el5uek dm01db02 = 2632-40011el5uek |
Status on Cluster Wide: PASS => RDBMS software owner UID matches across cluster |
dm01db01 = 1001 dm01db02 = 1001 |
Status on Cluster Wide: PASS => Time zone matches for Grid Infrastructure software owner across cluster |
dm01db01 = CST dm01db02 = CST |
Success Factor | DBMACHINE X2-2 AND X2-8 AUDIT CHECKS |
Recommendation | |
Needs attention on | - |
Passed on | Cluster Wide |
Status on Cluster Wide: PASS => Time zone matches for root user across cluster |
dm01db01 = CST dm01db02 = CST |
Status on dm01db01:/u01/app/oracle/product/11.2.0.3/dbhome_1: FAIL => System may be exposed to Exadata Critical Issue DB11 |
Oracle Interim Patch Installer version 11.2.0.3.0 Copyright (c) 2012, Oracle Corporation. All rights reserved. Oracle Home : /u01/app/oracle/product/11.2.0.3/dbhome_1 Central Inventory : /u01/app/oraInventory from : /u01/app/oracle/product/11.2.0.3/dbhome_1/oraInst.loc OPatch version : 11.2.0.3.0 OUI version : 11.2.0.3.0 Log file location : /u01/app/oracle/product/11.2.0.3/dbhome_1/cfgtoollogs/opatch/opatch2014-04-16_11-03-29AM_1.log Lsinventory Output file location : /u01/app/oracle/product/11.2.0.3/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2014-04-16_11-03-29AM.txt ------------------------------------------------------------------------------------------------------ Installed Top-level Products (1): Oracle Database 11g 11.2.0.3.0 There are 1 products installed in this Oracle Home. |
Status on dm01db02:/u01/app/oracle/product/11.2.0.3/dbhome_1: FAIL => System may be exposed to Exadata Critical Issue DB11 |
Oracle Interim Patch Installer version 11.2.0.3.0 Copyright (c) 2012, Oracle Corporation. All rights reserved. Oracle Home : /u01/app/oracle/product/11.2.0.3/dbhome_1 Central Inventory : /u01/app/oraInventory from : /u01/app/oracle/product/11.2.0.3/dbhome_1/oraInst.loc OPatch version : 11.2.0.3.0 OUI version : 11.2.0.3.0 Log file location : /u01/app/oracle/product/11.2.0.3/dbhome_1/cfgtoollogs/opatch/opatch2014-04-16_11-17-07AM_1.log Lsinventory Output file location : /u01/app/oracle/product/11.2.0.3/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2014-04-16_11-17-07AM.txt ------------------------------------------------------------------------------------------------------ Installed Top-level Products (1): Oracle Database 11g 11.2.0.3.0 There are 1 products installed in this Oracle Home. |
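The two lsinventory dumps above come from OPatch. A sketch of how the check would typically be reproduced by hand; the patch number for Exadata Critical Issue DB11 is not shown in this report and must be taken from the current Exadata Critical Issues MOS note before searching for it:

```shell
# List installed interim patches in the flagged home, then search the
# output for the DB11 fix's patch number (number intentionally omitted here)
$ORACLE_HOME/OPatch/opatch lsinventory \
  -oh /u01/app/oracle/product/11.2.0.3/dbhome_1
```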
Status on MCSDB1: PASS => RECYCLEBIN is set to the recommended value |
MCSDB1.recyclebin = on |
Status on MCSDB2: PASS => RECYCLEBIN is set to the recommended value |
MCSDB2.recyclebin = on |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => The celldisk configuration on flash memory devices matches Oracle best practices |
DATA FROM DM01CEL01 FOR VERIFY CELLDISK CONFIGURATION ON FLASH MEMORY DEVICES name: FD_00_dm01cel01 comment: creationTime: 2012-11-28T10:30:51+08:00 deviceName: /dev/sdr devicePartition: /dev/sdr diskType: FlashDisk errorCount: 0 freeSpace: 0 id: 6ba2320c-94d9-4553-bb70-6dc56de0afba interleaving: none lun: 1_0 physicalDisk: 1112M07TJA size: 22.875G status: normal name: FD_01_dm01cel01 |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => The griddisk count matches across all storage servers where a given prefix name exists |
DATA FROM DM01CEL01 FOR VERIFY GRIDDISK COUNT MATCHES ACROSS ALL STORAGE SERVERS WHERE A GIVEN PREFIX NAME EXISTS DATA_DM01_CD_00_dm01cel01 active DATA_DM01_CD_01_dm01cel01 active DATA_DM01_CD_02_dm01cel01 not present DATA_DM01_CD_03_dm01cel01 active DATA_DM01_CD_04_dm01cel01 active DATA_DM01_CD_05_dm01cel01 active DATA_DM01_CD_06_dm01cel01 active DATA_DM01_CD_07_dm01cel01 active DATA_DM01_CD_08_dm01cel01 active DATA_DM01_CD_09_dm01cel01 active DATA_DM01_CD_10_dm01cel01 active DATA_DM01_CD_11_dm01cel01 active DBFS_DG_CD_02_dm01cel01 not present DBFS_DG_CD_03_dm01cel01 active DBFS_DG_CD_04_dm01cel01 active |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => The total number of griddisks with a given prefix name is evenly divisible by the number of celldisks |
DATA FROM DM01CEL01 FOR VERIFY TOTAL NUMBER OF GRIDDISKS WITH A GIVEN PREFIX NAME IS EVENLY DIVISIBLE OF CELLDISKS DATA_DM01: SUCCESS RECO_DM01: SUCCESS DATA FROM DM01CEL02 FOR VERIFY TOTAL NUMBER OF GRIDDISKS WITH A GIVEN PREFIX NAME IS EVENLY DIVISIBLE OF CELLDISKS DATA_DM01: SUCCESS RECO_DM01: SUCCESS |
Status on dm01cel03, dm01cel02: PASS => The griddisk ASM status matches specification |
DATA FROM DM01CEL02 FOR VERIFY GRIDDISK ASM STATUS DATA_DM01_CD_00_dm01cel02 active ONLINE Yes DATA_DM01_CD_01_dm01cel02 active ONLINE Yes DATA_DM01_CD_02_dm01cel02 active ONLINE Yes DATA_DM01_CD_03_dm01cel02 active ONLINE Yes DATA_DM01_CD_04_dm01cel02 active ONLINE Yes DATA_DM01_CD_05_dm01cel02 active ONLINE Yes DATA_DM01_CD_06_dm01cel02 active ONLINE Yes DATA_DM01_CD_07_dm01cel02 active ONLINE Yes DATA_DM01_CD_08_dm01cel02 active ONLINE Yes DATA_DM01_CD_09_dm01cel02 active ONLINE Yes DATA_DM01_CD_10_dm01cel02 active ONLINE Yes DATA_DM01_CD_11_dm01cel02 active ONLINE Yes DBFS_DG_CD_02_dm01cel02 active ONLINE Yes DBFS_DG_CD_03_dm01cel02 active ONLINE Yes DBFS_DG_CD_04_dm01cel02 active ONLINE Yes DBFS_DG_CD_05_dm01cel02 active ONLINE Yes |
Status on dm01cel01: FAIL => The celldisk configuration on disk drives should match Oracle best practices |
DATA FROM DM01CEL01 FOR VERIFY CELLDISK CONFIGURATION ON DISK DRIVES name: CD_00_dm01cel01 comment: creationTime: 2012-11-28T10:30:40+08:00 deviceName: /dev/sda devicePartition: /dev/sda3 diskType: HardDisk errorCount: 0 freeSpace: 0 id: 5e35f429-e6b8-422a-bde4-2705c82fe9bc interleaving: none lun: 0_0 physicalDisk: L45WSN raidLevel: 0 size: 1832.59375G status: normal |
Status on dm01cel03, dm01cel02: PASS => The celldisk configuration on disk drives matches Oracle best practices |
DATA FROM DM01CEL02 FOR VERIFY CELLDISK CONFIGURATION ON DISK DRIVES name: CD_00_dm01cel02 comment: creationTime: 2012-11-28T10:30:41+08:00 deviceName: /dev/sda devicePartition: /dev/sda3 diskType: HardDisk errorCount: 0 freeSpace: 0 id: 6901754a-39ff-4d2a-8777-d5a1ac70f914 interleaving: none lun: 0_0 physicalDisk: L45T67 raidLevel: 0 size: 1832.59375G status: normal |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => The total size of all griddisks fully utilizes celldisk capacity |
DATA FROM DM01CEL01 FOR VERIFY TOTAL SIZE OF ALL GRIDDISKS FULLY UTILIZES CELLDISK CAPACITY Cell Disks Size = 22281.7 Grid Disks Size = 22282.2 DATA FROM DM01CEL02 FOR VERIFY TOTAL SIZE OF ALL GRIDDISKS FULLY UTILIZES CELLDISK CAPACITY Cell Disks Size = 22281.7 Grid Disks Size = 22282.2 |
Status on +ASM1: PASS => ASM parameter ASM_POWER_LIMIT is set to the default value. |
+ASM1.asm_power_limit = 1 |
Status on +ASM2: PASS => ASM parameter ASM_POWER_LIMIT is set to the default value. |
+ASM2.asm_power_limit = 1 |
Status on dm01db02: PASS => DNS Server ping time is in acceptable range |
Active DNS Server IP: 10.187.4.86 Average for 10 pings in ms: 0.1839 |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => DNS Server ping time is in acceptable range |
DATA FROM DM01CEL01 FOR VERIFY AVERAGE PING TIMES TO DNS NAMESERVER Active DNS Server IP: 10.187.4.86 Average for 10 pings in ms: 0.2088 DATA FROM DM01CEL02 FOR VERIFY AVERAGE PING TIMES TO DNS NAMESERVER Active DNS Server IP: 10.187.4.86 Average for 10 pings in ms: 0.3 |
Status on +ASM1: PASS => ASM parameter MEMORY_MAX_TARGET is set according to recommended value |
+ASM1.memory_max_target = 0 |
Status on +ASM2: PASS => ASM parameter MEMORY_MAX_TARGET is set according to recommended value |
+ASM2.memory_max_target = 0 |
Status on +ASM1: PASS => ASM parameter PGA_AGGREGATE_TARGET is set according to recommended value |
+ASM1.pga_aggregate_target = 419430400 |
Status on +ASM2: PASS => ASM parameter PGA_AGGREGATE_TARGET is set according to recommended value |
+ASM2.pga_aggregate_target = 419430400 |
Status on +ASM1: PASS => ASM parameter MEMORY_TARGET is set according to recommended value |
+ASM1.memory_target = 0 |
Status on +ASM2: PASS => ASM parameter MEMORY_TARGET is set according to recommended value |
+ASM2.memory_target = 0 |
Status on +ASM1: PASS => ASM parameter SGA_TARGET is set according to recommended value. |
+ASM1.sga_target = 1325400064 |
Status on +ASM2: PASS => ASM parameter SGA_TARGET is set according to recommended value. |
+ASM2.sga_target = 1325400064 |
Status on MCSDB1: PASS => fast_start_mttr_target has been changed from default |
MCSDB1.fast_start_mttr_target = 300 |
Status on MCSDB2: PASS => fast_start_mttr_target has been changed from default |
MCSDB2.fast_start_mttr_target = 300 |
Status on MCSDB: FAIL => Remote destination is not using either ASYNC or SYNC transport for redo transport |
DATA FOR MCSDB FOR REDO TRANSPORT PROTOCOL |
Status on MCSDB1: PASS => Database parameter UNDO_RETENTION is not null |
MCSDB1.undo_retention = 900 |
Status on MCSDB2: PASS => Database parameter UNDO_RETENTION is not null |
MCSDB2.undo_retention = 900 |
Status on dm01db01: PASS => Clusterware is running |
DATA FROM DM01DB01 - MCSDB DATABASE - CLUSTERWARE STATUS -------------------------------------------------------------------------------- NAME TARGET STATE SERVER STATE_DETAILS -------------------------------------------------------------------------------- Local Resources -------------------------------------------------------------------------------- ora.DATA_DM01.dg ONLINE ONLINE dm01db01 ONLINE ONLINE dm01db02 ora.DBFS_DG.dg ONLINE ONLINE dm01db01 ONLINE ONLINE dm01db02 ora.LISTENER.lsnr ONLINE ONLINE dm01db01 ONLINE ONLINE dm01db02 ora.RECO_DM01.dg ONLINE ONLINE dm01db01 |
Status on dm01db02: PASS => Clusterware is running |
-------------------------------------------------------------------------------- NAME TARGET STATE SERVER STATE_DETAILS -------------------------------------------------------------------------------- Local Resources -------------------------------------------------------------------------------- ora.DATA_DM01.dg ONLINE ONLINE dm01db01 ONLINE ONLINE dm01db02 ora.DBFS_DG.dg ONLINE ONLINE dm01db01 ONLINE ONLINE dm01db02 ora.LISTENER.lsnr ONLINE ONLINE dm01db01 ONLINE ONLINE dm01db02 ora.RECO_DM01.dg ONLINE ONLINE dm01db01 ONLINE ONLINE dm01db02 ora.asm ONLINE ONLINE dm01db01 Started ONLINE ONLINE dm01db02 Started |
Status on MCSDB: WARNING => Logical standby unsupported datatypes found |
DATA FOR MCSDB FOR LOGICAL STANDBY UNSUPPORTED DATATYPES DEV5_SOAINFRA AQ$_IP_QTAB_I DEV5_SOAINFRA AQ$_EDN_EVENT_QUEUE_TABLE_H DEV5_SOAINFRA AQ$_EDN_EVENT_QUEUE_TABLE_G DEV5_SOAINFRA AQ$_EDN_OAOO_DELIVERY_TABLE_I DEV6_BIPLATFORM SDCLEANUPLIST2 DEV1_SOAINFRA AQ$_IP_QTAB_I DEV1_SOAINFRA AQ$_EDN_EVENT_QUEUE_TABLE_G DEV1_SOAINFRA AQ$_EDN_OAOO_DELIVERY_TABLE_G GIS MAP_STATION_POI DEV1_SOAINFRA AQ$_IP_QTAB_S DEV1_SOAINFRA AQ$_IP_QTAB_H DEV1_SOAINFRA AQ$_EDN_EVENT_QUEUE_TABLE_T DEV1_SOAINFRA AQ$_EDN_EVENT_QUEUE_TABLE_I DEV1_SOAINFRA AQ$_EDN_OAOO_DELIVERY_TABLE_T |
Status on MCSDB: FAIL => Standby is not running in MANAGED REAL TIME APPLY mode |
DATA FOR MCSDB FOR STANDBY RECOVERY MODE |
Status on MCSDB: FAIL => Standby redo logs are not configured on both sites |
DATA FOR MCSDB FOR STANDBY REDOLOG STATUS ON PRIMARY |
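The failed check above reflects the usual Data Guard sizing guideline (stated here as an assumption, not taken from this report): each site should carry standby redo log groups, and per redo thread the standby should have one more SRL group than the primary has online redo log groups. A minimal sketch of that arithmetic:

```shell
# Sketch of the standby-redo-log sizing rule (assumed guideline:
# SRL groups per thread = online redo log groups per thread + 1).
srl_groups_needed() {
  # $1 = number of online redo log groups per thread on the primary
  echo $(( $1 + 1 ))
}

# Example: a primary with 4 online log groups per thread needs 5 SRL groups.
srl_groups_needed 4   # prints 5
```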
Status on MCSDB: PASS => At least one high redundancy diskgroup configured |
DATA FOR MCSDB FOR HIGH REDUNDANCY DISKGROUPS DATA_DM01 HIGH DBFS_DG NORMAL RECO_DM01 NORMAL |
Status on MCSDB: FAIL => Physical standby status is not valid |
DATA FOR MCSDB FOR PHYSICAL STANDBY STATUS |
Status on MCSDB: FAIL => Flashback is not configured |
DATA FOR MCSDB FOR FLASHBACK DATABASE ON PRIMARY primary_flashback = NO |
Status on dm01db02: WARNING => Database parameter DB_BLOCK_CHECKING is NOT set to the recommended value. |
DB_BLOCK_CHECKING = FALSE |
Success Factor | DBMACHINE X2-2 AND X2-8 AUDIT CHECKS |
Recommendation | |
Needs attention on | dm01cel03, dm01cel02, dm01cel01 |
Passed on | - |
Status on dm01cel03, dm01cel02, dm01cel01: FAIL => One or more storage servers have open critical alerts. |
DATA FROM DM01CEL01 FOR SCAN STORAGE SERVER ALERTHISTORY FOR OPEN ALERTS 6_1 2012-11-28T10:33:05+08:00 "Cell configuration check discovered the following problems: Check Exadata configuration via ipconf utility Verifying of Exadata configuration file /opt/oracle.cellos/cell.conf DNS server 10.187.0.206 exists only in Exadata configuration file : FAILED Error. Overall status of verification of Exadata configuration file: FAILED [INFO] The ipconf check may generate a failure for temporary inability to reach NTP or DNS server. You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] As root user run /usr/local/bin/ipconf -verify -semantic to verify consistent network configurations." 6_2 2013-01-16T15:41:00+08:00 "Cell configuration check discovered the following problems: Check Exadata configuration via ipconf utility Verifying of Exadata configuration file /opt/oracle.cellos/cell.conf DNS server 10.187.0.206 exists only in Exadata configuration file : FAILED Checking NTP server on 10.6.2.171 : FAILED Error. Overall status of verification of Exadata configuration file: FAILED [INFO] The ipconf check may generate a failure for temporary inability to reach NTP or DNS server. You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] As root user run /usr/local/bin/ipconf -verify -semantic to verify consistent network configurations." 6_3 2013-01-17T15:40:56+08:00 "Cell configuration check discovered the following problems: Check Exadata configuration via ipconf utility Verifying of Exadata configuration file /opt/oracle.cellos/cell.conf DNS server 10.187.0.206 exists only in Exadata configuration file : FAILED Error. 
Overall status of verification of Exadata configuration file: FAILED [INFO] The ipconf check may generate a failure for temporary inability to reach NTP or DNS server. You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] As root user run /usr/local/bin/ipconf -verify -semantic to verify consistent network configurations." 6_4 2013-02-21T15:48:57+08:00 "Cell configuration check discovered the following problems: Check Exadata configuration via ipconf utility Verifying of Exadata configuration file /opt/oracle.cellos/cell.conf DNS server 10.187.0.206 exists only in Exadata configuration file : FAILED Checking NTP server on 10.187.4.86 : FAILED Error. Overall status of verification of Exadata configuration file: FAILED [INFO] The ipconf check may generate a failure for temporary inability to reach NTP or DNS server. You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] As root user run /usr/local/bin/ipconf -verify -semantic to verify consistent network configurations." 6_5 2013-02-22T15:48:58+08:00 "Cell configuration check discovered the following problems: Check Exadata configuration via ipconf utility Verifying of Exadata configuration file /opt/oracle.cellos/cell.conf DNS server 10.187.0.206 exists only in Exadata configuration file : FAILED Error. Overall status of verification of Exadata configuration file: FAILED [INFO] The ipconf check may generate a failure for temporary inability to reach NTP or DNS server. You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] As root user run /usr/local/bin/ipconf -verify -semantic to verify consistent network configurations." 
6_6 2013-05-23T15:49:12+08:00 "Cell configuration check discovered the following problems: Check Exadata configuration via ipconf utility Verifying of Exadata configuration file /opt/oracle.cellos/cell.conf DNS server 10.187.0.206 exists only in Exadata configuration file : FAILED Checking NTP server on 10.187.4.86 : FAILED Error. Overall status of verification of Exadata configuration file: FAILED [INFO] The ipconf check may generate a failure for temporary inability to reach NTP or DNS server. You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] As root user run /usr/local/bin/ipconf -verify -semantic to verify consistent network configurations." 6_7 2013-05-24T15:49:13+08:00 "Cell configuration check discovered the following problems: Check Exadata configuration via ipconf utility Verifying of Exadata configuration file /opt/oracle.cellos/cell.conf DNS server 10.187.0.206 exists only in Exadata configuration file : FAILED Error. Overall status of verification of Exadata configuration file: FAILED [INFO] The ipconf check may generate a failure for temporary inability to reach NTP or DNS server. You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] As root user run /usr/local/bin/ipconf -verify -semantic to verify consistent network configurations." 6_8 2013-08-26T15:51:01+08:00 "Cell configuration check discovered the following problems: Check Exadata configuration via ipconf utility Verifying of Exadata configuration file /opt/oracle.cellos/cell.conf Checking DNS server on 10.187.0.206 : FAILED DNS server 10.187.0.206 exists only in Exadata configuration file : FAILED Error. 
Overall status of verification of Exadata configuration file: FAILED [INFO] The ipconf check may generate a failure for temporary inability to reach NTP or DNS server. You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] As root user run /usr/local/bin/ipconf -verify -semantic to verify consistent network configurations." 6_9 2013-08-27T15:50:46+08:00 "Cell configuration check discovered the following problems: Check Exadata configuration via ipconf utility Verifying of Exadata configuration file /opt/oracle.cellos/cell.conf DNS server 10.187.0.206 exists only in Exadata configuration file : FAILED Error. Overall status of verification of Exadata configuration file: FAILED [INFO] The ipconf check may generate a failure for temporary inability to reach NTP or DNS server. You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] As root user run /usr/local/bin/ipconf -verify -semantic to verify consistent network configurations." 6_10 2014-03-08T15:55:33+08:00 "Cell configuration check discovered the following problems: Check Exadata configuration via ipconf utility Verifying of Exadata configuration file /opt/oracle.cellos/cell.conf DNS server 10.187.0.206 exists only in Exadata configuration file : FAILED Checking NTP server on 10.6.2.171 : FAILED Error. Overall status of verification of Exadata configuration file: FAILED [INFO] The ipconf check may generate a failure for temporary inability to reach NTP or DNS server. You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] As root user run /usr/local/bin/ipconf -verify -semantic to verify consistent network configurations." 
6_11 2014-03-11T15:55:29+08:00 "Cell configuration check discovered the following problems: Check Exadata configuration via ipconf utility Verifying of Exadata configuration file /opt/oracle.cellos/cell.conf DNS server 10.187.0.206 exists only in Exadata configuration file : FAILED Error. Overall status of verification of Exadata configuration file: FAILED [INFO] The ipconf check may generate a failure for temporary inability to reach NTP or DNS server. You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] As root user run /usr/local/bin/ipconf -verify -semantic to verify consistent network configurations." 6_12 2014-03-30T15:57:11+08:00 "Cell configuration check discovered the following problems: Check Exadata configuration via ipconf utility Verifying of Exadata configuration file /opt/oracle.cellos/cell.conf Checking DNS server on 10.187.0.206 : FAILED DNS server 10.187.0.206 exists only in Exadata configuration file : FAILED Error. Overall status of verification of Exadata configuration file: FAILED [INFO] The ipconf check may generate a failure for temporary inability to reach NTP or DNS server. You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] As root user run /usr/local/bin/ipconf -verify -semantic to verify consistent network configurations." 6_13 2014-03-31T15:56:56+08:00 "Cell configuration check discovered the following problems: Check Exadata configuration via ipconf utility Verifying of Exadata configuration file /opt/oracle.cellos/cell.conf DNS server 10.187.0.206 exists only in Exadata configuration file : FAILED Error. Overall status of verification of Exadata configuration file: FAILED [INFO] The ipconf check may generate a failure for temporary inability to reach NTP or DNS server. 
You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] You may ignore this alert, if the NTP or DNS servers are valid and available. [INFO] As root user run /usr/local/bin/ipconf -verify -semantic to verify consistent network configurations." 8_1 2012-12-03T12:38:31+08:00 "File system "/" is 80% full, which is above the 80% threshold. Accelerated space reclamation has started. This alert will be cleared when file system "/" becomes less than 75% full. Top three directories ordered by total space usage are as follows: /opt : 3.68G /usr : 2.4G /root : 1.55G" 60_1 2013-12-06T18:39:58+08:00 "Data hard disk entered predictive failure status. Status : WARNING - PREDICTIVE FAILURE Manufacturer : SEAGATE Model Number : ST32000SSSUN2.0T Size : 2.0TB Serial Number : 1108L45RD9 Firmware : 061A Slot Number : 2 Cell Disk : CD_02_dm01cel01 Grid Disk : RECO_DM01_CD_02_dm01cel01, DATA_DM01_CD_02_dm01cel01, DBFS_DG_CD_02_dm01cel01" 60_2 2013-12-06T19:13:08+08:00 "Data hard disk failed. Status : CRITICAL Manufacturer : SEAGATE Model Number : ST32000SSSUN2.0T Size : 2.0T Serial Number : 1108L45RD9 Firmware : 061A Slot Number : 2 Cell Disk : CD_02_dm01cel01 Grid Disk : RECO_DM01_CD_02_dm01cel01, DATA_DM01_CD_02_dm01cel01, DBFS_DG_CD_02_dm01cel01" |
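The alert history above mixes ignorable ipconf warnings with a genuine disk failure. A hypothetical sketch of triaging such output (the sample lines below are illustrative, not from a live cell; on a real cell the input would come from `cellcli -e "list alerthistory where alertState like 'open' detail"`):

```shell
# Illustrative alerthistory-style sample: id, severity, message.
sample_alerts='60_1 warning Data hard disk entered predictive failure status.
60_2 critical Data hard disk failed.
6_13 warning Cell configuration check discovered problems.'

# Count alerts whose severity field is "critical".
count_critical() {
  printf '%s\n' "$sample_alerts" | awk '$2 == "critical" {n++} END {print n+0}'
}

count_critical   # prints 1
```

Anything counted here (like the failed disk in slot 2 of dm01cel01) warrants action even when the surrounding ipconf alerts can be dismissed.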
Status on dm01cel03, dm01cel02, dm01cel01: PASS => Smart flash log is created on all storage servers |
DATA FROM DM01CEL01 FOR VERIFY EXADATA SMART FLASH LOG IS CREATED name: dm01cel01_FLASHLOG cellDisk: FD_08_dm01cel01,FD_06_dm01cel01,FD_09_dm01cel01,FD_12_dm01cel01,FD_02_dm01cel01,FD_01_dm01cel01,FD_04_dm01cel01,FD_14_dm01cel01,FD_05_dm01cel01,FD_13_dm01cel01,FD_10_dm01cel01,FD_00_dm01cel01,FD_07_dm01cel01,FD_15_dm01cel01,FD_11_dm01cel01,FD_03_dm01cel01 creationTime: 2012-11-28T10:31:16+08:00 degradedCelldisks: effectiveSize: 512M efficiency: 100.0 id: 463e749c-cdbf-469d-abad-283c3eb39f0b size: 512M status: normal DATA FROM DM01CEL02 FOR VERIFY EXADATA SMART FLASH LOG IS CREATED |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => Storage Server Flash Memory is configured as Exadata Smart Flash Cache |
DATA FROM DM01CEL01 FOR VERIFY EXADATA SMART FLASH CACHE IS CREATED NOTE: Look for size or size: name: dm01cel01_FLASHCACHE cellDisk: FD_07_dm01cel01,FD_13_dm01cel01,FD_09_dm01cel01,FD_05_dm01cel01,FD_10_dm01cel01,FD_00_dm01cel01,FD_04_dm01cel01,FD_14_dm01cel01,FD_11_dm01cel01,FD_08_dm01cel01,FD_15_dm01cel01,FD_06_dm01cel01,FD_02_dm01cel01,FD_01_dm01cel01,FD_03_dm01cel01,FD_12_dm01cel01 creationTime: 2012-11-28T10:31:41+08:00 degradedCelldisks: effectiveCacheSize: 364.75G id: 71c5504c-606e-4142-a5ef-e195dca88900 size: 364.75G status: normal DATA FROM DM01CEL02 FOR VERIFY EXADATA SMART FLASH CACHE IS CREATED |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => Peripheral component interconnect (PCI) bridge is configured for generation II on all storage servers |
DATA FROM DM01CEL01 FOR VERIFY PCI BRIDGE IS CONFIGURED FOR GENERATION II ON STORAGE SERVERS 19:0.0 82 27:0.0 82 DATA FROM DM01CEL02 FOR VERIFY PCI BRIDGE IS CONFIGURED FOR GENERATION II ON STORAGE SERVERS 19:0.0 82 27:0.0 82 |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => There are no griddisks configured on flash memory devices |
DATA FROM DM01CEL01 FOR VERIFY THERE ARE NO GRIDDISKS CONFIGURED ON FLASH MEMORY DEVICES name: DATA_DM01_CD_00_dm01cel01 asmDiskgroupName: DATA_DM01 asmDiskName: DATA_DM01_CD_00_DM01CEL01 asmFailGroupName: DM01CEL01 availableTo: cachingPolicy: default cellDisk: CD_00_dm01cel01 comment: creationTime: 2012-11-28T10:33:54+08:00 diskType: HardDisk errorCount: 0 id: 01f726cc-700b-485a-9eaa-694eb0849b2a offset: 32M size: 1562G status: active |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => No Storage Server conventional or flash disks have a performance problem |
DATA FROM DM01CEL01 FOR VERIFY STORAGE SERVER METRIC CD_IO_ST_RQ DATA FROM DM01CEL02 FOR VERIFY STORAGE SERVER METRIC CD_IO_ST_RQ |
Status on dm01db02: PASS => Only one non-ASM instance discovered |
oracle 9353 1 0 Feb08 ? 00:48:45 ora_pmon_MCSDB2 |
Status on dm01db01: PASS => Database parameters log_archive_dest_n with Location attribute are all set to recommended value |
DATA FROM DM01DB01 - MCSDB DATABASE - LOG_ARCHIVE_DEST_N |
Status on dm01db02: PASS => Database parameters log_archive_dest_n with Location attribute are all set to recommended value |
Status on dm01db01: PASS => Database parameter COMPATIBLE is set to recommended value |
DATA FROM DM01DB01 - MCSDB DATABASE - COMPATIBLE instance_version = 11.2.0.3.0 and compatible = 11.2.0.3.0 |
Status on dm01db02: PASS => Database parameter COMPATIBLE is set to recommended value |
instance_version = 11.2.0.3.0 and compatible = 11.2.0.3.0 |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => All InfiniBand network cables are connected on all Storage Servers |
DATA FROM DM01CEL01 FOR VERIFY INFINIBAND CABLE CONNECTION QUALITY ON STORAGE SERVERS /sys/class/net/ib0/carrier = 1 /sys/class/net/ib1/carrier = 1 DATA FROM DM01CEL02 FOR VERIFY INFINIBAND CABLE CONNECTION QUALITY ON STORAGE SERVERS /sys/class/net/ib0/carrier = 1 /sys/class/net/ib1/carrier = 1 |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => All Ethernet network cables are connected on all Storage Servers |
DATA FROM DM01CEL01 FOR VERIFY ETHERNET CABLE CONNECTION QUALITY ON STORAGE SERVERS /sys/class/net/eth0/carrier = 1 /sys/class/net/eth1/carrier = /sys/class/net/eth2/carrier = /sys/class/net/eth3/carrier = DATA FROM DM01CEL02 FOR VERIFY ETHERNET CABLE CONNECTION QUALITY ON STORAGE SERVERS /sys/class/net/eth0/carrier = 1 /sys/class/net/eth1/carrier = /sys/class/net/eth2/carrier = /sys/class/net/eth3/carrier = |
Status on dm01db02: PASS => All InfiniBand network cables are connected |
/sys/class/net/ib0/carrier = 1 /sys/class/net/ib1/carrier = 1 |
Status on dm01db02: PASS => Database parameter db_recovery_file_dest_size is set to recommended value |
90% of RECO_DM01 Total Space = 8522GB db_recovery_file_dest_size= 2048GB |
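The PASS above rests on the rule that `db_recovery_file_dest_size` must not exceed 90% of the RECO diskgroup's total space. A sketch of that comparison (the 9469GB total is back-calculated from the report's "90% = 8522GB" figure, so treat it as an assumption):

```shell
# Check db_recovery_file_dest_size against 90% of the RECO diskgroup total.
check_fra_size() {
  # $1 = RECO diskgroup total (GB), $2 = db_recovery_file_dest_size (GB)
  limit_gb=$(( $1 * 90 / 100 ))
  if [ "$2" -le "$limit_gb" ]; then echo PASS; else echo FAIL; fi
}

check_fra_size 9469 2048   # prints PASS (2048GB <= 8522GB)
```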
Status on dm01db02: PASS => Database DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST are in different diskgroups |
db_recovery_file_dest = +RECO_DM01 db_create_file_dest = NONE SPECIFIED |
Status on MCSDB1: PASS => Database parameter PARALLEL_EXECUTION_MESSAGE_SIZE is set to recommended value |
MCSDB1.parallel_execution_message_size = 16384 |
Status on MCSDB2: PASS => Database parameter PARALLEL_EXECUTION_MESSAGE_SIZE is set to recommended value |
MCSDB2.parallel_execution_message_size = 16384 |
Status on MCSDB1: PASS => Database parameter SQL92_SECURITY is set to recommended value |
MCSDB1.sql92_security = TRUE |
Status on MCSDB2: PASS => Database parameter SQL92_SECURITY is set to recommended value |
MCSDB2.sql92_security = TRUE |
Status on MCSDB1: FAIL => Database parameter USE_LARGE_PAGES is NOT set to recommended value |
MCSDB1.use_large_pages = TRUE |
Status on MCSDB2: FAIL => Database parameter USE_LARGE_PAGES is NOT set to recommended value |
MCSDB2.use_large_pages = TRUE |
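Both instances show `use_large_pages = TRUE` yet the check fails; the recommended value on Exadata for this release is typically ONLY (stated here as an assumption based on common exachk guidance, since the report does not print the expected value). A minimal sketch of the comparison the check performs:

```shell
# Sketch of the parameter check; the recommended value "ONLY" is an assumption.
check_use_large_pages() {
  case "$1" in
    ONLY) echo PASS ;;
    *)    echo "FAIL: use_large_pages=$1, recommended ONLY" ;;
  esac
}

check_use_large_pages TRUE   # prints the FAIL message, matching the report
```

With ONLY, the instance refuses to start unless the full SGA fits in HugePages, which is why it is preferred over TRUE on dedicated database servers.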
Status on MCSDB1: PASS => Database parameter OPEN_CURSORS is set to recommended value |
MCSDB1.open_cursors = 3000 |
Status on MCSDB2: PASS => Database parameter OPEN_CURSORS is set to recommended value |
MCSDB2.open_cursors = 3000 |
Status on MCSDB1: PASS => Database parameter OS_AUTHENT_PREFIX is set to recommended value |
MCSDB1.os_authent_prefix = |
Status on MCSDB2: PASS => Database parameter OS_AUTHENT_PREFIX is set to recommended value |
MCSDB2.os_authent_prefix = |
Status on MCSDB1: PASS => Database parameter PARALLEL_THREADS_PER_CPU is set to recommended value |
MCSDB1.parallel_threads_per_cpu = 1 |
Status on MCSDB2: PASS => Database parameter PARALLEL_THREADS_PER_CPU is set to recommended value |
MCSDB2.parallel_threads_per_cpu = 1 |
Status on MCSDB1: PASS => Database parameter _ENABLE_NUMA_SUPPORT is set to recommended value |
_enable_NUMA_support = FALSE |
Status on MCSDB2: PASS => Database parameter _ENABLE_NUMA_SUPPORT is set to recommended value |
_enable_NUMA_support = FALSE |
Status on MCSDB1: PASS => Database parameter PARALLEL_ADAPTIVE_MULTI_USER is set to recommended value |
MCSDB1.parallel_adaptive_multi_user = FALSE |
Status on MCSDB2: PASS => Database parameter PARALLEL_ADAPTIVE_MULTI_USER is set to recommended value |
MCSDB2.parallel_adaptive_multi_user = FALSE |
Status on MCSDB1: PASS => Database parameter _file_size_increase_increment is set to the recommended value |
_file_size_increase_increment = 2044M |
Status on MCSDB2: PASS => Database parameter _file_size_increase_increment is set to the recommended value |
_file_size_increase_increment = 2044M |
Status on MCSDB1: FAIL => Database parameter GLOBAL_NAMES is NOT set to recommended value |
MCSDB1.global_names = FALSE |
Status on MCSDB2: FAIL => Database parameter GLOBAL_NAMES is NOT set to recommended value |
MCSDB2.global_names = FALSE |
Status on MCSDB1: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value |
MCSDB1.db_lost_write_protect = typical |
Status on MCSDB2: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value |
MCSDB2.db_lost_write_protect = typical |
Status on MCSDB1: PASS => Database parameter LOG_BUFFER is set to recommended value |
MCSDB1.log_buffer = 134217728 |
Status on MCSDB2: PASS => Database parameter LOG_BUFFER is set to recommended value |
MCSDB2.log_buffer = 134217728 |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => Disk cache policy is set to Disabled on all storage servers |
DATA FROM DM01CEL01 FOR VERIFY DISK CACHE POLICY ON STORAGE SERVER Disk Cache Policy : Disabled Slot Number: 0 Disk Cache Policy : Disabled Slot Number: 1 Disk Cache Policy : Disabled Slot Number: 3 Disk Cache Policy : Disabled Slot Number: 4 Disk Cache Policy : Disabled Slot Number: 5 Disk Cache Policy : Disabled Slot Number: 6 Disk Cache Policy : Disabled Slot Number: 7 Disk Cache Policy : Disabled Slot Number: 8 |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => Electronic Storage Module (ESM) Lifetime is within specification for all flash cards on all storage servers |
DATA FROM DM01CEL01 FOR VERIFY ELECTRONIC STORAGE MODULE (ESM) LIFETIME IS WITHIN SPECIFICATION /SYS/MB/RISER1/PCIE1/F20CARD is an F20M2 model and this esm lifetime check does not apply. /SYS/MB/RISER1/PCIE4/F20CARD is an F20M2 model and this esm lifetime check does not apply. /SYS/MB/RISER2/PCIE2/F20CARD is an F20M2 model and this esm lifetime check does not apply. /SYS/MB/RISER2/PCIE5/F20CARD is an F20M2 model and this esm lifetime check does not apply. DATA FROM DM01CEL02 FOR VERIFY ELECTRONIC STORAGE MODULE (ESM) LIFETIME IS WITHIN SPECIFICATION |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => Management network is separate from data network on all storage servers |
DATA FROM DM01CEL01 FOR DATA NETWORK IS SEPARATE FROM MANAGEMENT NETWORK ON STORAGE SERVER ifcfg-bondib0:NETWORK=192.168.8.0 ifcfg-eth0:NETWORK=10.187.5.0 DATA FROM DM01CEL02 FOR DATA NETWORK IS SEPARATE FROM MANAGEMENT NETWORK ON STORAGE SERVER ifcfg-bondib0:NETWORK=192.168.8.0 ifcfg-eth0:NETWORK=10.187.5.0 |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => Ambient temperature is within the recommended range. |
DATA FROM DM01CEL01 FOR AMBIENT TEMPERATURE value = 25.000 degree C DATA FROM DM01CEL02 FOR AMBIENT TEMPERATURE value = 26.000 degree C DATA FROM DM01CEL03 FOR AMBIENT TEMPERATURE |
Status on dm01db02: PASS => Oracle ASM Communication is using RDS protocol on Infiniband Network |
rds |
Status on dm01db01: FAIL => InfiniBand network error counters are non-zero |
DATA FROM DM01DB01 FOR INFINIBAND SWITCH COUNTERS ON ALL SWITCHS Suppressing: RcvSwRelayErrors XmtDiscards XmtWait Errors for 0x2128e8b08ea0a0 "SUN DCS 36P QDR dm01sw-ib3.mcsdb.com" GUID 0x2128e8b08ea0a0 port 7: [RcvErrors == 1] Link info: 2 7[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 0x0021280001cf4886 8 2[ ] "SUN DCS 36P QDR dm01sw-ib3.mcsdb.com" ( ) Errors for 0x2128e8af6da0a0 "SUN DCS 36P QDR dm01sw-ib2.mcsdb.com" GUID 0x2128e8af6da0a0 port 7: [RcvErrors == 1] Link info: 1 7[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 0x0021280001cf4886 7 1[ ] "SUN DCS 36P QDR dm01sw-ib2.mcsdb.com" ( ) GUID 0x2128e8af6da0a0 port 10: [RcvErrors == 1] Link info: 1 10[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 0x0021280001cf4862 3 1[ ] "SUN DCS 36P QDR dm01sw-ib2.mcsdb.com" ( ) GUID 0x2128e8af6da0a0 port 13: [LinkDowned == 1] Link info: 1 13[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 0x002128e8b08ea0a0 2 14[ ] "SUN DCS 36P QDR dm01sw-ib2.mcsdb.com" ( ) GUID 0x2128e8af6da0a0 port 14: [LinkDowned == 1] Link info: 1 14[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 0x002128e8b08ea0a0 2 13[ ] "SUN DCS 36P QDR dm01sw-ib2.mcsdb.com" ( ) GUID 0x2128e8af6da0a0 port 15: [LinkDowned == 1] Link info: 1 15[ ] ==( 4X 10.0 Gbps Active/ LinkUp)==> 0x002128e8b08ea0a0 2 16[ ] "SUN DCS 36P QDR dm01sw-ib2.mcsdb.com" ( ) GUID 0x2128e8af6da0a0 port 16: [LinkDowned == 1] |
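The FAIL above fires whenever any port counter such as RcvErrors or LinkDowned is non-zero. A hypothetical sketch of scanning ibqueryerrors-style lines for such counters (the sample input is illustrative, not the full switch output above):

```shell
# Illustrative counter lines in the "[Name == value]" style seen above.
sample='port 7: [RcvErrors == 1]
port 10: [RcvErrors == 0]
port 13: [LinkDowned == 1]'

# Count entries whose counter value is greater than zero.
nonzero_errors() {
  printf '%s\n' "$sample" | awk -F'== ' '{gsub(/\]/, "", $2); if ($2+0 > 0) n++} END {print n+0}'
}

nonzero_errors   # prints 2
```

Isolated single-digit counters often date from cable reseats or switch reboots; the usual follow-up is to clear the counters and re-check rather than act on historical noise.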
Status on dm01cel03, dm01cel02, dm01cel01: PASS => Software profile check is successful on all storage servers. |
DATA FROM DM01CEL01 FOR VERIFY SOFTWARE ON STORAGE SERVERS (CHECKSWPROFILE.SH) [INFO] SUCCESS: Meets requirements of operating platform and InfiniBand software. [INFO] Check does NOT verify correctness of configuration for installed software. DATA FROM DM01CEL02 FOR VERIFY SOFTWARE ON STORAGE SERVERS (CHECKSWPROFILE.SH) [INFO] SUCCESS: Meets requirements of operating platform and InfiniBand software. [INFO] Check does NOT verify correctness of configuration for installed software. |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => Hardware and firmware profile check is successful on all storage servers. |
DATA FROM DM01CEL01 FOR VERIFY HARDWARE AND FIRMWARE ON DATABASE AND STORAGE SERVERS (CHECKHWNFWPROFILE) [STORAGE SERVER] [SUCCESS] The hardware and firmware profile matches one of the supported profiles DATA FROM DM01CEL02 FOR VERIFY HARDWARE AND FIRMWARE ON DATABASE AND STORAGE SERVERS (CHECKHWNFWPROFILE) [STORAGE SERVER] [SUCCESS] The hardware and firmware profile matches one of the supported profiles DATA FROM DM01CEL03 FOR VERIFY HARDWARE AND FIRMWARE ON DATABASE AND STORAGE SERVERS (CHECKHWNFWPROFILE) [STORAGE SERVER] |
Status on MCSDB: FAIL => Some data or temp files are not autoextensible |
DATA FOR MCSDB FOR NON-AUTOEXTENSIBLE DATA AND TEMP FILES +DATA_DM01/mcsdb/datafile/dev1_oim.447.802540729 +DATA_DM01/mcsdb/datafile/dev5_apm.391.801068887 +DATA_DM01/mcsdb/datafile/dev5_brsadata.381.801068881 +DATA_DM01/mcsdb/datafile/dev5_brsaindx.369.801068877 +DATA_DM01/mcsdb/datafile/dev5_ias_iau.393.801068887 +DATA_DM01/mcsdb/datafile/dev5_ias_oif.417.801068897 +DATA_DM01/mcsdb/datafile/dev5_ias_orasdpm.416.801068897 +DATA_DM01/mcsdb/datafile/dev5_mds.419.801068897 +DATA_DM01/mcsdb/datafile/dev5_oam.418.801068897 +DATA_DM01/mcsdb/datafile/dev5_oim.413.801068895 +DATA_DM01/mcsdb/datafile/dev5_oim_lob.387.801068885 +DATA_DM01/mcsdb/datafile/dev5_soainfra.374.801068879 +DATA_DM01/mcsdb/datafile/dev5_tbs_oaam_data.384.801068883 +DATA_DM01/mcsdb/datafile/dev5_tbs_oaam_data_apr.371.801068877 |
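A sketch of the selection logic behind this check: list files whose AUTOEXTENSIBLE flag is NO. The sample rows below pair two file names from the report with a hypothetical YES row standing in for a query against dba_data_files/dba_temp_files:

```shell
# Illustrative rows: file_name autoextensible (the YES row is hypothetical).
sample_files='+DATA_DM01/mcsdb/datafile/dev1_oim.447.802540729 NO
+DATA_DM01/mcsdb/datafile/dev5_mds.419.801068897 NO
+DATA_DM01/mcsdb/datafile/system.256.800000001 YES'

# Print only the files that are not autoextensible.
non_autoextensible() {
  printf '%s\n' "$sample_files" | awk '$2 == "NO" {print $1}'
}

non_autoextensible   # prints the two NO files
```

Each reported file would then be fixed with ALTER DATABASE DATAFILE (or TEMPFILE) ... AUTOEXTEND ON, with a sensible MAXSIZE.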
Status on dm01db01: PASS => Remote listener is set to SCAN name |
DATA FROM DM01DB01 - MCSDB DATABASE - REMOTE LISTENER SET TO SCAN NAME remote listener name=dm01-scan scan name= dm01-scan |
Status on dm01db02: PASS => Remote listener is set to SCAN name |
remote listener name=dm01-scan scan name= dm01-scan |
Status on dm01db01: PASS => NTP is running with correct setting |
DATA FROM DM01DB01 - MCSDB DATABASE - NTP WITH CORRECT SETTING ntp 5477 1 0 Feb08 ? 00:05:56 ntpd -u ntp:ntp -p /var/run/ntpd.pid -x |
Status on dm01db02: PASS => NTP is running with correct setting |
ntp 5462 1 0 Feb08 ? 00:05:50 ntpd -u ntp:ntp -p /var/run/ntpd.pid -x |
Status on dm01db02: PASS => Interconnect is configured on non-routable network addresses |
bondib0 192.168.8.0 global cluster_interconnect |
Status on MCSDB: PASS => SYS.IDGEN1$ sequence cache size >= 1,000 |
DATA FOR MCSDB FOR IDGEN$ SEQUENCE CACHE SIZE idgen1$.cache_size = 1000 |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => OSWatcher is running on all storage servers |
DATA FROM DM01CEL01 FOR OSWATCHER STATUS ON STORAGE SERVERS NOTE: No output would indicate OSWatcher not running root 15556 1 0 04:02 ? 00:00:04 /bin/ksh ./OSWatcher.sh 15 168 bzip2 3 root 16301 15556 0 11:00 ? 00:00:00 /bin/ksh ./oswsub.sh HighFreq ./Exadata_cellsrvstat.sh root 24275 24273 0 11:05 ? 00:00:00 grep -i osw root 26519 15556 0 04:02 ? 00:00:04 /bin/ksh ./OSWatcherFM.sh 168 3 root 26539 15556 0 04:02 ? 00:00:00 /bin/ksh ./oswsub.sh HighFreq ./Exadata_vmstat.sh root 26540 15556 0 04:02 ? 00:00:00 /bin/ksh ./oswsub.sh HighFreq ./Exadata_mpstat.sh root 26541 15556 0 04:02 ? 00:00:00 /bin/ksh ./oswsub.sh HighFreq ./Exadata_netstat.sh root 26542 15556 0 04:02 ? 00:00:00 /bin/ksh ./oswsub.sh HighFreq ./Exadata_iostat.sh root 26543 15556 0 04:02 ? 00:00:00 /bin/ksh ./oswsub.sh HighFreq ./Exadata_diskstats.sh root 26548 15556 0 04:02 ? 00:00:00 /bin/ksh ./oswsub.sh HighFreq ./Exadata_top.sh root 26559 15556 0 04:02 ? 00:00:00 /bin/ksh ./oswsub.sh HighFreq /opt/oracle.oswatcher/osw/ExadataRdsInfo.sh root 26579 26559 0 04:02 ? 00:00:04 /bin/bash /opt/oracle.oswatcher/osw/ExadataRdsInfo.sh HighFreq |
Status on MCSDB: WARNING => Some tablespaces are not using Automatic Segment Space Management (ASSM). |
DATA FOR MCSDB FOR AUTOMATIC SEGMENT STORAGE MANAGEMENT OLTS_BATTRSTORE OLTS_SVRMGSTORE |
Status on dm01db01: PASS => ORA_CRS_HOME environment variable is not set |
DATA FROM DM01DB01 - MCSDB DATABASE - NO CRS HOME ENV VARIABLE SUDOCMD=/usr/bin/sudo HOSTNAME=dm01db01.mcsdb.com SHELL=/bin/bash TERM=xterm HISTSIZE=1000 CRS_HOME=/u01/app/11.2.0.3/grid ORACLE_UNQNAME=MCSDB USER=oracle LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0.3/dbhome_1/jdk/lib:/u01/app/oracle/product/11.2.0.3/dbhome_1/lib:/u01/app/11.2.0.3/grid/lib LS_COLORS=no=00:fi=00:di=00;34:ln=00;36:pi=40;33:so=00;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=00;32:*.cmd=00;32:*.exe=00;32:*.com=00;32:*.btm=00;32:*.bat=00;32:*.sh=00;32:*.csh=00;32:*.tar=00;31:*.tgz=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.zip=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.bz=00;31:*.tz=00;31:*.rpm=00;31:*.cpio=00;31:*.jpg=00;35:*.gif=00;35:*.bmp=00;35:*.xbm=00;35:*.xpm=00;35:*.png=00;35:*.tif=00;35: ORACLE_SID=MCSDB1 ORACLE_BASE=/u01/app/oracle MAIL=/var/spool/mail/oracle PATH=/u01/app/oracle/product/11.2.0.3/dbhome_1/bin:/u01/app/oracle/product/11.2.0.3/dbhome_1/jdk/bin:/u01/app/oracle/product/11.2.0.3/dbhome_1/bin:/u01/app/oracle/product/11.2.0.3/dbhome_1/jdk/bin:/usr/local/bin:/bin:/usr/bin:.:/u01/app/oracle/product/11.2.0.3/dbhome_1/bin INPUTRC=/etc/inputrc PWD=/opt/oracle.SupportTools/exachk |
Status on MCSDB: PASS => GC blocks lost is not occurring |
DATA FOR MCSDB FOR GC BLOCK LOST No of GC lost block in last 24 hours = 0 |
Status on dm01db01: PASS => NIC bonding mode is not set to Broadcast(3) for cluster interconnect |
DATA FROM DM01DB01 - MCSDB DATABASE - NIC BONDING MODE INTERCONNECT Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009) Bonding Mode: fault-tolerance (active-backup) (fail_over_mac active) Primary Slave: None Currently Active Slave: ib0 MII Status: up MII Polling Interval (ms): 100 Up Delay (ms): 5000 Down Delay (ms): 5000 Slave Interface: ib0 MII Status: up Link Failure Count: 0 Permanent HW addr: 80:00:00:48:fe:80 Slave queue ID: 0 |
Status on dm01db02: PASS => NIC bonding mode is not set to Broadcast(3) for cluster interconnect |
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009) Bonding Mode: fault-tolerance (active-backup) (fail_over_mac active) Primary Slave: None Currently Active Slave: ib0 MII Status: up MII Polling Interval (ms): 100 Up Delay (ms): 5000 Down Delay (ms): 5000 Slave Interface: ib0 MII Status: up Link Failure Count: 0 Permanent HW addr: 80:00:00:48:fe:80 Slave queue ID: 0 Slave Interface: ib1 MII Status: up Link Failure Count: 0 Permanent HW addr: 80:00:00:49:fe:80 |
Status on MCSDB: PASS => SYS.AUDSES$ sequence cache size >= 10,000 |
DATA FOR MCSDB FOR AUDSES$ SEQUENCE CACHE SIZE audses$.cache_size = 10000 |
Status on dm01db01: PASS => SELinux is not being Enforced. |
DATA FROM DM01DB01 - MCSDB DATABASE - SELINUX STATUS Disabled |
Status on dm01db02: PASS => SELinux is not being Enforced. |
Disabled |
Status on dm01db02: PASS => $ORACLE_HOME/bin/oradism ownership is root |
-rwsr-x--- 1 root oinstall 71758 Nov 28 2012 /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oradism |
Status on dm01db01: PASS => NIC bonding is configured for interconnect |
DATA FROM DM01DB01 - MCSDB DATABASE - INTERCONNECT NIC BONDING CONFIG. bondib0 192.168.8.0 global cluster_interconnect |
Status on dm01db02: PASS => NIC bonding is configured for interconnect |
bondib0 192.168.8.0 global cluster_interconnect |
Status on dm01db02: PASS => $ORACLE_HOME/bin/oradism setuid bit is set |
-rwsr-x--- 1 root oinstall 71758 Nov 28 2012 /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oradism |
Success Factor | DBMACHINE X2-2 AND X2-8 AUDIT CHECKS |
Recommendation | |
Needs attention on | - |
Passed on | dm01db01, dm01db02 |
Status on dm01db01: PASS => NIC bonding mode is not set to Broadcast(3) for public network |
DATA FROM DM01DB01 - MCSDB DATABASE - NIC BONDING MODE PUBLIC NOTE: Look for Bonding Mode: Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009) Bonding Mode: fault-tolerance (active-backup) Primary Slave: None Currently Active Slave: eth1 MII Status: up MII Polling Interval (ms): 100 Up Delay (ms): 5000 Down Delay (ms): 5000 Slave Interface: eth1 MII Status: up Link Failure Count: 0 |
Status on dm01db02: PASS => NIC bonding mode is not set to Broadcast(3) for public network |
NOTE: Look for Bonding Mode: Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009) Bonding Mode: fault-tolerance (active-backup) Primary Slave: None Currently Active Slave: eth1 MII Status: up MII Polling Interval (ms): 100 Up Delay (ms): 5000 Down Delay (ms): 5000 Slave Interface: eth1 MII Status: up Link Failure Count: 0 Permanent HW addr: 00:21:28:e7:c0:a5 Slave queue ID: 0 Slave Interface: eth2 |
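As the NOTE says, the check keys on the "Bonding Mode:" line of the bonding driver status and fails only on broadcast (mode 3). A sketch of that parse, with an excerpt of the output above standing in for the real /proc/net/bonding file (the file path on a live node is an assumption):

```shell
# Hedged sketch: extract "Bonding Mode:" from bonding-driver output and
# fail only when the mode is broadcast. The sample mimics the dump above;
# on a live node the input would be e.g. /proc/net/bonding/bondeth0.
sample='Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth1'
bond_mode=$(printf '%s\n' "$sample" | awk -F': ' '/^Bonding Mode:/ {print $2}')
case "$bond_mode" in
  broadcast*) bond_check="FAIL" ;;
  *)          bond_check="PASS" ;;
esac
echo "Bonding Mode: $bond_mode => $bond_check"
```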
Status on dm01db01: PASS => Shell limit hard stack for GI is configured according to recommendation |
DATA FROM DM01DB01 FOR CRS USER LIMITS CONFIGURATION Limit Soft Limit Hard Limit Units Max cpu time unlimited unlimited seconds Max file size unlimited unlimited bytes Max data size unlimited unlimited bytes Max stack size 1572864 unlimited bytes Max core file size unlimited unlimited bytes Max resident set unlimited unlimited bytes Max processes 773848 773848 processes Max open files 65536 65536 files Max locked memory unlimited unlimited bytes Max address space unlimited unlimited bytes Max file locks unlimited unlimited locks Max pending signals 773848 773848 signals Max msgqueue size 819200 819200 bytes Max nice priority 0 0 Max realtime priority 0 0 |
Status on dm01db02: PASS => Shell limit hard stack for GI is configured according to recommendation |
DATA FROM DM01DB02 FOR CRS USER LIMITS CONFIGURATION Limit Soft Limit Hard Limit Units Max cpu time unlimited unlimited seconds Max file size unlimited unlimited bytes Max data size unlimited unlimited bytes Max stack size 1572864 unlimited bytes Max core file size unlimited unlimited bytes Max resident set unlimited unlimited bytes Max processes 773848 773848 processes Max open files 65536 65536 files Max locked memory unlimited unlimited bytes Max address space unlimited unlimited bytes Max file locks unlimited unlimited locks Max pending signals 773848 773848 signals Max msgqueue size 819200 819200 bytes Max nice priority 0 0 Max realtime priority 0 0 |
Status on dm01db01: PASS => Shell limit soft nproc for DB is configured according to recommendation |
DATA FROM DM01DB01 - MCSDB DATABASE - DB SHELL LIMITS SOFT NPROC oracle soft nproc 131072 |
Status on dm01db02: PASS => Shell limit soft nproc for DB is configured according to recommendation |
oracle soft nproc 131072 |
Status on dm01db01: PASS => Shell limit hard stack for DB is configured according to recommendation |
DATA FROM DM01DB01 - MCSDB DATABASE - DB SHELL LIMITS HARD STACK oracle hard stack unlimited |
Status on dm01db02: PASS => Shell limit hard stack for DB is configured according to recommendation |
oracle hard stack unlimited |
Status on dm01db01: PASS => Shell limit hard nofile for DB is configured according to recommendation |
DATA FROM DM01DB01 - MCSDB DATABASE - DB SHELL LIMITS HARD NOFILE oracle hard nofile 65536 |
Status on dm01db02: PASS => Shell limit hard nofile for DB is configured according to recommendation |
oracle hard nofile 65536 |
Status on dm01db01: PASS => Shell limit soft nofile for DB is configured according to recommendation |
DATA FROM DM01DB01 - MCSDB DATABASE - DB SHELL LIMITS SOFT NOFILE oracle soft nofile 65536 |
Status on dm01db02: PASS => Shell limit soft nofile for DB is configured according to recommendation |
oracle soft nofile 65536 |
Status on dm01db01: PASS => Shell limit hard nproc for DB is configured according to recommendation |
DATA FROM DM01DB01 - MCSDB DATABASE - DB SHELL LIMITS HARD NPROC oracle hard nproc 131072 |
Status on dm01db02: PASS => Shell limit hard nproc for DB is configured according to recommendation |
oracle hard nproc 131072 |
Status on dm01db01: PASS => Shell limit hard nproc for GI is configured according to recommendation |
DATA FROM DM01DB01 - MCSDB DATABASE - GI SHELL LIMITS HARD NPROC oracle soft core unlimited oracle hard core unlimited oracle soft nproc 131072 oracle hard nproc 131072 oracle soft nofile 65536 oracle hard nofile 65536 oracle soft memlock unlimited oracle hard memlock unlimited grid soft core unlimited grid hard core unlimited grid soft nproc 131072 grid hard nproc 131072 |
Status on dm01db02: PASS => Shell limit hard nproc for GI is configured according to recommendation |
oracle soft core unlimited oracle hard core unlimited oracle soft nproc 131072 oracle hard nproc 131072 oracle soft nofile 65536 oracle hard nofile 65536 oracle soft memlock unlimited oracle hard memlock unlimited grid soft core unlimited grid hard core unlimited grid soft nproc 131072 grid hard nproc 131072 grid soft nofile 65536 grid hard nofile 65536 grid soft memlock unlimited grid hard memlock unlimited |
Status on dm01db01: PASS => Shell limit hard nofile for GI is configured according to recommendation |
DATA FROM DM01DB01 - MCSDB DATABASE - GI SHELL LIMITS HARD NOFILE oracle soft core unlimited oracle hard core unlimited oracle soft nproc 131072 oracle hard nproc 131072 oracle soft nofile 65536 oracle hard nofile 65536 oracle soft memlock unlimited oracle hard memlock unlimited grid soft core unlimited grid hard core unlimited grid soft nproc 131072 grid hard nproc 131072 |
Status on dm01db02: PASS => Shell limit hard nofile for GI is configured according to recommendation |
oracle soft core unlimited oracle hard core unlimited oracle soft nproc 131072 oracle hard nproc 131072 oracle soft nofile 65536 oracle hard nofile 65536 oracle soft memlock unlimited oracle hard memlock unlimited grid soft core unlimited grid hard core unlimited grid soft nproc 131072 grid hard nproc 131072 grid soft nofile 65536 grid hard nofile 65536 grid soft memlock unlimited grid hard memlock unlimited |
Status on dm01db01: PASS => Shell limit soft nproc for GI is configured according to recommendation |
DATA FROM DM01DB01 - MCSDB DATABASE - GI SHELL LIMITS SOFT NPROC oracle soft core unlimited oracle hard core unlimited oracle soft nproc 131072 oracle hard nproc 131072 oracle soft nofile 65536 oracle hard nofile 65536 oracle soft memlock unlimited oracle hard memlock unlimited grid soft core unlimited grid hard core unlimited grid soft nproc 131072 grid hard nproc 131072 |
Status on dm01db02: PASS => Shell limit soft nproc for GI is configured according to recommendation |
oracle soft core unlimited oracle hard core unlimited oracle soft nproc 131072 oracle hard nproc 131072 oracle soft nofile 65536 oracle hard nofile 65536 oracle soft memlock unlimited oracle hard memlock unlimited grid soft core unlimited grid hard core unlimited grid soft nproc 131072 grid hard nproc 131072 grid soft nofile 65536 grid hard nofile 65536 grid soft memlock unlimited grid hard memlock unlimited |
Status on dm01db01: PASS => Shell limit soft nofile for GI is configured according to recommendation |
DATA FROM DM01DB01 - MCSDB DATABASE - GI SHELL LIMITS SOFT NOFILE oracle soft core unlimited oracle hard core unlimited oracle soft nproc 131072 oracle hard nproc 131072 oracle soft nofile 65536 oracle hard nofile 65536 oracle soft memlock unlimited oracle hard memlock unlimited grid soft core unlimited grid hard core unlimited grid soft nproc 131072 grid hard nproc 131072 |
Status on dm01db02: PASS => Shell limit soft nofile for GI is configured according to recommendation |
oracle soft core unlimited oracle hard core unlimited oracle soft nproc 131072 oracle hard nproc 131072 oracle soft nofile 65536 oracle hard nofile 65536 oracle soft memlock unlimited oracle hard memlock unlimited grid soft core unlimited grid hard core unlimited grid soft nproc 131072 grid hard nproc 131072 grid soft nofile 65536 grid hard nofile 65536 grid soft memlock unlimited grid hard memlock unlimited |
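The shell-limit checks above all reduce to reading user/type/item entries in limits.conf-style configuration and comparing them to the recommended values. A sketch of that parse, using lines from the dumps above as sample input:

```shell
# Hedged sketch: pull oracle hard-nproc and soft-nofile values out of a
# limits.conf-style fragment (sample lines copied from the report data).
limits='oracle soft nproc 131072
oracle hard nproc 131072
oracle soft nofile 65536
oracle hard nofile 65536
grid soft nproc 131072
grid hard nproc 131072'
hard_nproc=$(printf '%s\n' "$limits" | awk '$1=="oracle" && $2=="hard" && $3=="nproc" {print $4}')
soft_nofile=$(printf '%s\n' "$limits" | awk '$1=="oracle" && $2=="soft" && $3=="nofile" {print $4}')
echo "oracle hard nproc=$hard_nproc soft nofile=$soft_nofile"
```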
Status on dm01db02: PASS => Management network is separate from data network |
ifcfg-bondeth0:NETWORK=10.187.4.0 ifcfg-bondib0:NETWORK=192.168.8.0 ifcfg-eth0:NETWORK=10.187.5.0 |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => RAID controller battery temperature is normal [Storage Server] |
DATA FROM DM01CEL01 FOR VERIFY RAID CONTROLLER BATTERY TEMPERATURE [STORAGE SERVER] Temperature: 39 C Temperature : OK Over Temperature : No DATA FROM DM01CEL02 FOR VERIFY RAID CONTROLLER BATTERY TEMPERATURE [STORAGE SERVER] Temperature: 38 C Temperature : OK Over Temperature : No |
Status on dm01db02: PASS => ASM Audit file destination file count <= 100,000 |
Number of audit files at /u01/app/11.2.0.3/grid/rdbms/audit = 38 |
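This check is effectively a file count under the Grid home's rdbms/audit directory compared against the 100,000 threshold. A self-contained sketch of the same logic, run against a throwaway directory rather than the real audit destination:

```shell
# Hedged sketch of the audit-file-count check, demonstrated on a temp
# directory instead of /u01/app/11.2.0.3/grid/rdbms/audit.
audit_dir=$(mktemp -d)
for i in 1 2 3; do : > "$audit_dir/ora_${i}.aud"; done
count=$(find "$audit_dir" -type f -name '*.aud' | wc -l)
if [ "$count" -le 100000 ]; then audit_check="PASS"; else audit_check="FAIL"; fi
echo "audit files=$count => $audit_check"
rm -rf "$audit_dir"
```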
Success Factor | DBMACHINE X2-2 AND X2-8 AUDIT CHECKS |
Recommendation | |
Needs attention on | - |
Passed on | dm01cel03, dm01cel02, dm01cel01 |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => All Exadata storage servers meet system model number requirement |
DATA FROM DM01CEL01 FOR EXADATA STORAGE SERVER SYSTEM MODEL NUMBER NOTE: Look for system_description = Connected. Use ^D to exit. -> show /SP system_description /SP Properties: system_description = SUN FIRE X4270 M2 SERVER, ILOM v3.0.16.10.d, r74499 -> Session closed Disconnected |
Success Factor | DBMACHINE X2-2 AND X2-8 AUDIT CHECKS |
Recommendation | |
Needs attention on | - |
Passed on | dm01db01, dm01db02 |
Status on dm01db01: PASS => Number of Mounts before a File System check is set to -1 for system disk |
DATA FROM DM01DB01 FOR NUMBER OF MOUNTS BEFORE A FILE SYSTEM CHECK NOTE: Look for Maximum mount count tune2fs 1.39 (29-May-2006) Filesystem volume name: BOOT Last mounted on: <not available> Filesystem UUID: d8d92dff-d116-403a-8476-9cb0cce12bd4 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal resize_inode dir_index filetype needs_recovery sparse_super Default mount options: (none) Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 32128 Block count: 128488 |
Status on dm01db02: PASS => Number of Mounts before a File System check is set to -1 for system disk |
DATA FROM DM01DB02 FOR NUMBER OF MOUNTS BEFORE A FILE SYSTEM CHECK NOTE: Look for Maximum mount count tune2fs 1.39 (29-May-2006) Filesystem volume name: BOOT Last mounted on: <not available> Filesystem UUID: 6ab16708-dba7-43ae-8629-e05ce8cfce1a Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal resize_inode dir_index filetype needs_recovery sparse_super Default mount options: (none) Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 32128 Block count: 128488 |
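As the NOTE indicates, the check reads "Maximum mount count" from tune2fs -l output; -1 means a mount count never forces an fsck. A sketch of the parse, with sample lines standing in for the real dump (the device name is a placeholder; the remedial command would be along the lines of `tune2fs -c -1 <device>`):

```shell
# Hedged sketch: parse "Maximum mount count" from tune2fs -l style output.
# Sample lines mimic the dump above; a live check would read
#   tune2fs -l <device>
sample='Filesystem state:         clean
Maximum mount count:      -1
Mount count:              12'
max_mounts=$(printf '%s\n' "$sample" | awk -F':' '/^Maximum mount count/ {gsub(/ /,"",$2); print $2}')
if [ "$max_mounts" = "-1" ]; then mount_check="PASS"; else mount_check="FAIL"; fi
echo "Maximum mount count=$max_mounts => $mount_check"
```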
Status on dm01db02: PASS => Free space in root(/) filesystem meets or exceeds recommendation. |
Filesystem Size Used Avail Use% Mounted on /dev/mapper/VGExaDb-LVDbSys1 30G 9.5G 19G 34% / |
Status on dm01cel03, dm01cel02, dm01cel01: WARNING => Free space in root(/) filesystem is less than recommended on one or more storage servers. |
DATA FROM DM01CEL01 FOR EXADATA STORAGE SERVER ROOT FILESYSTEM FREE SPACE Filesystem Size Used Avail Use% Mounted on /dev/md6 9.9G 8.8G 593M 94% / DATA FROM DM01CEL02 FOR EXADATA STORAGE SERVER ROOT FILESYSTEM FREE SPACE Filesystem Size Used Avail Use% Mounted on /dev/md6 9.9G 8.5G 884M 91% / |
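The warning fires when root-filesystem utilization crosses a threshold; dm01cel01 above is at 94%. A sketch of that comparison, using the reported df line as sample input (the 90% threshold is an assumption for illustration; the report does not state the exact cutoff):

```shell
# Hedged sketch: flag a filesystem whose Use% exceeds a threshold.
# The sample line is copied from the dm01cel01 data above.
sample='/dev/md6 9.9G 8.8G 593M 94% /'
use_pct=$(printf '%s\n' "$sample" | awk '{sub(/%/,"",$5); print $5}')
if [ "$use_pct" -lt 90 ]; then space_check="PASS"; else space_check="WARNING"; fi
echo "/ use=${use_pct}% => $space_check"
```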
Status on dm01db02: PASS => Oracle RAC Communication is using RDS protocol on Infiniband Network |
rds |
Status on dm01db02: PASS => InfiniBand is the Private Network for Oracle Clusterware Communication |
bondib0 = InfiniBand |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => All storage server disk controllers use writeback cache |
DATA FROM DM01CEL01 FOR VERIFY STORAGE SERVER DISK CONTROLLERS USE WRITEBACK CACHE Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU DATA FROM DM01CEL02 FOR VERIFY STORAGE SERVER DISK CONTROLLERS USE WRITEBACK CACHE |
Status on dm01cel03, dm01cel02, dm01cel01: FAIL => Storage Server alerts are not configured to be sent via email |
DATA FROM DM01CEL01 FOR CONFIGURE STORAGE SERVER ALERTS TO BE SENT VIA EMAIL DATA FROM DM01CEL02 FOR CONFIGURE STORAGE SERVER ALERTS TO BE SENT VIA EMAIL DATA FROM DM01CEL03 FOR CONFIGURE STORAGE SERVER ALERTS TO BE SENT VIA EMAIL |
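Clearing this finding means configuring SMTP notification on each cell via CellCLI. The fragment below follows the documented ALTER CELL attribute names, but the mail host and addresses are placeholders, not values from this environment:

```text
CellCLI> ALTER CELL smtpServer='mailhost.example.com', -
                    smtpFromAddr='dm01cel01@example.com', -
                    smtpToAddr='dba-team@example.com', -
                    notificationPolicy='critical,warning,clear', -
                    notificationMethod='mail'
CellCLI> ALTER CELL VALIDATE MAIL
```

ALTER CELL VALIDATE MAIL sends a test message so delivery can be confirmed before relying on alerting.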
Status on dm01cel03, dm01cel02, dm01cel01: PASS => No celldisks have status of predictive failure |
DATA FROM DM01CEL01 FOR EXADATA CELLDISK PREDICTIVE FAILURES NOTE: Celldisk Name, Status, Size Attributes CD_00_dm01cel01 normal 1832.59375G CD_01_dm01cel01 normal 1832.59375G CD_02_dm01cel01 not present 1861.703125G CD_03_dm01cel01 normal 1861.703125G CD_04_dm01cel01 normal 1861.703125G CD_05_dm01cel01 normal 1861.703125G CD_06_dm01cel01 normal 1861.703125G CD_07_dm01cel01 normal 1861.703125G CD_08_dm01cel01 normal 1861.703125G CD_09_dm01cel01 normal 1861.703125G CD_10_dm01cel01 normal 1861.703125G CD_11_dm01cel01 normal 1861.703125G FD_00_dm01cel01 normal 22.875G |
Status on dm01cel03, dm01cel02, dm01cel01: PASS => RAID controller version matches on all storage servers |
DATA FROM DM01CEL01 FOR RAID CONTROLLER VERSION ON STORAGE SERVERS NOTE: Look for FW Package Build Adapter #0 ============================================================================== Versions ================ Product Name : LSI MegaRAID SAS 9261-8i Serial No : SV10914745 FW Package Build: 12.12.0-0079 Mfg. Data ================ Mfg. Date : 03/02/11 |
Success Factor | DBMACHINE X2-2 AND X2-8 AUDIT CHECKS |
Recommendation | The RDBMS version for X2-2 is expected to be 11.2.0.2 or higher. |
Needs attention on | - |
Passed on | MCSDB |
Status on MCSDB: PASS => RDBMS Version is 11.2.0.2 or higher as expected |
DATA FOR MCSDB FOR RDBMS VERSION RDBMS Version = 11.2.0.3.0 |
Status on MCSDB: PASS => All tablespaces are locally managed tablespaces |
DATA FOR MCSDB FOR LOCALLY MANAGED TABLESPACES SYSTEM LOCAL SYSAUX LOCAL UNDOTBS1 LOCAL TEMP LOCAL UNDOTBS2 LOCAL USERS LOCAL MCSDW LOCAL MCSODS LOCAL IDX_MCSODS LOCAL IDX_MCSDW LOCAL MCSSTG LOCAL MCSAPP LOCAL INFO_BASE LOCAL INFO_DATA LOCAL |
Status on dm01db01: PASS => ASM Version is 11.2.0.2 or higher as expected |
DATA FROM DM01DB01 - MCSDB DATABASE - ASM VERSION asm instance version = 11.2.0.3.0 |
Status on dm01db02: PASS => ASM Version is 11.2.0.2 or higher as expected |
asm instance version = 11.2.0.3.0 |
Status on MCSDB1: FAIL => Database parameter _lm_rcvr_hang_allow_time is NOT set to the recommended value |
_lm_rcvr_hang_allow_time = 70 |
Status on MCSDB2: FAIL => Database parameter _lm_rcvr_hang_allow_time is NOT set to the recommended value |
_lm_rcvr_hang_allow_time = 70 |
Status on MCSDB1: FAIL => Database parameter _kill_diagnostics_timeout is not set to recommended value |
_kill_diagnostics_timeout = 60 |
Status on MCSDB2: FAIL => Database parameter _kill_diagnostics_timeout is not set to recommended value |
_kill_diagnostics_timeout = 60 |
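Both findings call for changing underscore (hidden) parameters, which should only be done with Oracle Support guidance. A hedged sketch of the change; the report does not state the recommended values, so <value> is left as a placeholder:

```sql
-- Hedged sketch: set the exachk-recommended values per the MOS note for
-- this check. <value> is a placeholder; both changes take effect after
-- an instance restart.
ALTER SYSTEM SET "_lm_rcvr_hang_allow_time" = <value> SCOPE = SPFILE SID = '*';
ALTER SYSTEM SET "_kill_diagnostics_timeout" = <value> SCOPE = SPFILE SID = '*';
```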
Status on dm01cel03, dm01cel02, dm01cel01: PASS => No Storage Server Memory (ECC) Errors found. |
DATA FROM DM01CEL01 FOR VERIFY THERE ARE NO STORAGE SERVER MEMORY (ECC) ERRORS NOTE: No output means no errors were found DATA FROM DM01CEL02 FOR VERIFY THERE ARE NO STORAGE SERVER MEMORY (ECC) ERRORS NOTE: No output means no errors were found |
Status on MCSDB1: FAIL => Database parameter DB_BLOCK_CHECKSUM is NOT set to recommended value |
MCSDB1.db_block_checksum = typical |
Status on MCSDB2: FAIL => Database parameter DB_BLOCK_CHECKSUM is NOT set to recommended value |
MCSDB2.db_block_checksum = typical |
Status on dm01db02: PASS => Database server InfiniBand network is in "connected" mode. |
/sys/class/net/ib0/mode:connected /sys/class/net/ib1/mode:connected |
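The check passes only when every InfiniBand interface reports "connected" in sysfs. A sketch of that logic, with the reported lines standing in for the real /sys/class/net/ib*/mode files:

```shell
# Hedged sketch: verify every ib interface mode is "connected".
# Sample lines are copied from the data above; a live check would read
# /sys/class/net/ib*/mode directly.
sample='/sys/class/net/ib0/mode:connected
/sys/class/net/ib1/mode:connected'
bad=$(printf '%s\n' "$sample" | awk -F':' '$2 != "connected"' | wc -l)
if [ "$bad" -eq 0 ]; then ib_check="PASS"; else ib_check="FAIL"; fi
echo "InfiniBand connected-mode check => $ib_check"
```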
Please compare these versions against Database Machine and Exadata Storage Server 11g Release 2 (11.2) Supported Versions (Doc ID 888828.1) in My Oracle Support.
Clusterware and RDBMS software version:
  dm01db01.CRS_ACTIVE_VERSION = 11.2.0.3.0
  dm01db01.MCSDB.INSTANCE_VERSION = 112030
Clusterware home (/u01/app/11.2.0.3/grid) patch inventory:
  Patch 14275572 : applied on Wed Nov 28 10:45:33 CST 2012
  Patch 14307915 : applied on Wed Nov 28 10:46:19 CST 2012
  Patch 14474780 : applied on Wed Nov 28 10:44:21 CST 2012
  Patch description: "QUARTERLY CRS PATCH FOR EXADATA (OCT 2012 - 11.2.0.3.11) : (14275572)"
  Patch description: "QUARTERLY DATABASE PATCH FOR EXADATA (OCT 2012 - 11.2.0.3.11) : (14474780)"
  Patch description: "QUARTERLY DISKMON PATCH FOR EXADATA (OCT 2012 - 11.2.0.3.11) : (14307915)"
Exadata Server software version: 11.2.3.2.0.120713
Infiniband HCA firmware version: 2.7.8130
OpenFabrics Enterprise Distribution (OFED) software version: 1.5.1
Operating system and kernel version: Red Hat Enterprise Linux Server release 5.8 (Tikanga), kernel=2.6.32-400.1.1.el5uek
RDBMS home (/u01/app/oracle/product/11.2.0.3/dbhome_1) patch inventory:
  Patch 14275572 : applied on Wed Nov 28 11:23:12 CST 2012
  Patch 14307915 : applied on Wed Nov 28 11:23:28 CST 2012
  Patch 14474780 : applied on Wed Nov 28 11:22:20 CST 2012
  Patch description: "QUARTERLY CRS PATCH FOR EXADATA (OCT 2012 - 11.2.0.3.11) : (14275572)"
  Patch description: "QUARTERLY DATABASE PATCH FOR EXADATA (OCT 2012 - 11.2.0.3.11) : (14474780)"
  Patch description: "QUARTERLY DISKMON PATCH FOR EXADATA (OCT 2012 - 11.2.0.3.11) : (14307915)"
Clusterware and RDBMS software version:
  dm01db02.CRS_ACTIVE_VERSION = 11.2.0.3.0
  dm01db02.MCSDB.INSTANCE_VERSION = 112030
Clusterware home (/u01/app/11.2.0.3/grid) patch inventory:
  Patch 14275572 : applied on Wed Nov 28 10:48:03 CST 2012
  Patch 14307915 : applied on Wed Nov 28 10:48:50 CST 2012
  Patch 14474780 : applied on Wed Nov 28 10:46:50 CST 2012
  Patch description: "QUARTERLY CRS PATCH FOR EXADATA (OCT 2012 - 11.2.0.3.11) : (14275572)"
  Patch description: "QUARTERLY DATABASE PATCH FOR EXADATA (OCT 2012 - 11.2.0.3.11) : (14474780)"
  Patch description: "QUARTERLY DISKMON PATCH FOR EXADATA (OCT 2012 - 11.2.0.3.11) : (14307915)"
Exadata Server software version: 11.2.3.2.0.120713
Infiniband HCA firmware version: 2.7.8130
OpenFabrics Enterprise Distribution (OFED) software version: 1.5.1
Operating system and kernel version: Red Hat Enterprise Linux Server release 5.8 (Tikanga), kernel=2.6.32-400.1.1.el5uek
RDBMS home (/u01/app/oracle/product/11.2.0.3/dbhome_1) patch inventory:
  Patch 14275572 : applied on Wed Nov 28 11:23:15 CST 2012
  Patch 14307915 : applied on Wed Nov 28 11:23:32 CST 2012
  Patch 14474780 : applied on Wed Nov 28 11:22:21 CST 2012
  Patch description: "QUARTERLY CRS PATCH FOR EXADATA (OCT 2012 - 11.2.0.3.11) : (14275572)"
  Patch description: "QUARTERLY DATABASE PATCH FOR EXADATA (OCT 2012 - 11.2.0.3.11) : (14474780)"
  Patch description: "QUARTERLY DISKMON PATCH FOR EXADATA (OCT 2012 - 11.2.0.3.11) : (14307915)"
Exadata Server software version: 11.2.3.2.0.120713 | Infiniband HCA firmware version: 2.7.8130 | OFED software version: 1.5.1 | OS and kernel: Red Hat Enterprise Linux Server release 5.8 (Tikanga), kernel=2.6.32-400.1.1.el5uek
Exadata Server software version: 11.2.3.2.0.120713 | Infiniband HCA firmware version: 2.7.8130 | OFED software version: 1.5.1 | OS and kernel: Red Hat Enterprise Linux Server release 5.8 (Tikanga), kernel=2.6.32-400.1.1.el5uek
Exadata Server software version: 11.2.3.2.0.120713 | Infiniband HCA firmware version: 2.7.8130 | OFED software version: 1.5.1 | OS and kernel: Red Hat Enterprise Linux Server release 5.8 (Tikanga), kernel=2.6.32-400.1.1.el5uek
skipping Infiniband switch HOSTNAME configuration (checkid:- 9AD56124DDFE9FCCE040E50A1EC038A6) on dm01sw-ib3 because s_sysconfig_network_dm01sw-ib3.out not found
skipping Infiniband switch HOSTNAME configuration (checkid:- 9AD56124DDFE9FCCE040E50A1EC038A6) on dm01sw-ib2 because s_sysconfig_network_dm01sw-ib2.out not found
skipping Infiniband Switch NTP configuration (checkid:- 9AD59DE0898D0513E040E50A1EC03EEA) on dm01sw-ib3 because s_ntp_dm01sw-ib3.out not found
skipping Infiniband Switch NTP configuration (checkid:- 9AD59DE0898D0513E040E50A1EC03EEA) on dm01sw-ib2 because s_ntp_dm01sw-ib2.out not found
skipping Infiniband switch sminfo_polling_timeout configuration (checkid:- 9AD8CC2B50B63DEBE040E50A1EC0529A) on dm01sw-ib3 because s_opensm_dm01sw-ib3.out not found
skipping Infiniband switch sminfo_polling_timeout configuration (checkid:- 9AD8CC2B50B63DEBE040E50A1EC0529A) on dm01sw-ib2 because s_opensm_dm01sw-ib2.out not found
skipping Infiniband switch routing_engine configuration (checkid:- 9AD8F72CFE0AC95BE040E50A1EC050D0) on dm01sw-ib3 because s_opensm_dm01sw-ib3.out not found
skipping Infiniband switch routing_engine configuration (checkid:- 9AD8F72CFE0AC95BE040E50A1EC050D0) on dm01sw-ib2 because s_opensm_dm01sw-ib2.out not found
skipping sm_priority configuration on Infiniband switch (checkid:- 9AD95A48A426E029E040E50A1EC062A1) on dm01sw-ib3 because s_sm_priority_status_dm01sw-ib3.out not found
skipping sm_priority configuration on Infiniband switch (checkid:- 9AD95A48A426E029E040E50A1EC062A1) on dm01sw-ib2 because s_sm_priority_status_dm01sw-ib2.out not found
skipping Infiniband switch log_flags configuration (checkid:- 9ADA623709086DC5E040E50A1EC0168D) on dm01sw-ib3 because s_opensm_dm01sw-ib3.out not found
skipping Infiniband switch log_flags configuration (checkid:- 9ADA623709086DC5E040E50A1EC0168D) on dm01sw-ib2 because s_opensm_dm01sw-ib2.out not found
skipping Infiniband subnet manager status (checkid:- 9ADA9729FCD46EBBE040E50A1EC02350) on dm01sw-ib3 because s_opensmd_status_dm01sw-ib3.out not found
skipping Infiniband subnet manager status (checkid:- 9ADA9729FCD46EBBE040E50A1EC02350) on dm01sw-ib2 because s_opensmd_status_dm01sw-ib2.out not found
skipping Infiniband switch controlled_handover configuration (checkid:- 9ADAAD73FF532FE4E040E50A1EC0284E) on dm01sw-ib3 because s_opensm_dm01sw-ib3.out not found
skipping Infiniband switch controlled_handover configuration (checkid:- 9ADAAD73FF532FE4E040E50A1EC0284E) on dm01sw-ib2 because s_opensm_dm01sw-ib2.out not found
skipping Infiniband switch polling_retry_number configuration (checkid:- 9ADAAF3071391E94E040E50A1EC028AF) on dm01sw-ib3 because s_opensm_dm01sw-ib3.out not found
skipping Infiniband switch polling_retry_number configuration (checkid:- 9ADAAF3071391E94E040E50A1EC028AF) on dm01sw-ib2 because s_opensm_dm01sw-ib2.out not found
skipping Switch firmware version (checkid:- B0A0A6141D1A39CCE0431EC0E50AB237) on dm01sw-ib3 because s_nm2version_dm01sw-ib3.out not found
skipping Switch firmware version (checkid:- B0A0A6141D1A39CCE0431EC0E50AB237) on dm01sw-ib2 because s_nm2version_dm01sw-ib2.out not found
skipping Hostname in /etc/hosts (checkid:- B0A4363CC03E5EA3E0431EC0E50A3489) on dm01sw-ib3 because s_etc_hostname_dm01sw-ib3.out not found
skipping Hostname in /etc/hosts (checkid:- B0A4363CC03E5EA3E0431EC0E50A3489) on dm01sw-ib2 because s_etc_hostname_dm01sw-ib2.out not found
skipping Verify average ping times to DNS nameserver (checkid:- B81546A46C376C14E0431EC0E50A826D) on dm01sw-ib3 because s_dns_ping_time_dm01sw-ib3.out not found
skipping Verify average ping times to DNS nameserver (checkid:- B81546A46C376C14E0431EC0E50A826D) on dm01sw-ib2 because s_dns_ping_time_dm01sw-ib2.out not found