LogMiner Basics
Timothy J. Herring, Boise Inc.
Abstract
This is an introduction to Oracle's LogMiner, intended to acquaint DBAs with this tool when they may have never made use of its capabilities.  There are times when, even as a DBA armed with the multitude of Oracle strategies for pulling a rabbit out of your hat, options are few.  LogMiner is an Oracle utility that puts the magic in your hands.  It will allow you to pull the bacon out of the fire when rolling the database back may seem like your only option.
Introduction
At too many conferences there are no sessions on LogMiner.  LogMiner is your window into the redo logs, to recreate transactions or undo disaster.  In direct or indirect ways, mining the redo is the magic behind Oracle Streams, the standby database, and flashback technologies.
LogMiner was introduced in Oracle 8i and has seen many improvements in 9i, 10g and 11g.  Having a basic understanding of LogMiner, setting up the environment, using the packages, simple searching strategies and appropriate use of the tool will enable the DBA to add yet another utility to their bag of tricks.

The online redo logs and archived logs are the source of LogMiner's magic.  When a change occurs in the database, entries are written into the log buffer.  The log writer process (LGWR) writes the log buffer to the online redo logs.  These in turn are written to the archive logs, if the database is in ARCHIVELOG mode, by the archiver process (ARC0).  The redo logs and archived logs contain redo and undo information that LogMiner can translate into a usable form for the DBA.
[Figure: Basic flow of a change into the REDO and archive logs]
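
One lightweight way to watch this flow on a live database is to query the online log groups; a minimal sketch against the standard v$log view (status values cycle through CURRENT, ACTIVE, and INACTIVE as LGWR and ARC0 do their work):

SQL> -- one row per online log group; ARCHIVED shows whether ARC0 has copied it out
SQL> SELECT group#, sequence#, status, archived FROM v$log;
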
Before diving into the use of LogMiner, the DBA should be familiar with the basics of Oracle backup and recovery.  Familiarity with archive log mode will add to your ability to find the SQL you are looking for.
The objectives of this paper are to lay out how to set up and use LogMiner, cover some basic searching strategies, and address LogMiner's appropriate use.
Setting Up LogMiner
The basics of LogMiner exemplify the "KISS" method: Keep It Short & Simple.  With a minimum number of commands you can make use of LogMiner.  A few definitions are in order:
  • Source Database – The database that produces the redo log files you want to analyze.
  • Mining Database – The database LogMiner uses when you perform the analysis.
  • LogMiner Dictionary – Used to provide object attributes instead of internal object IDs for tables and columns.  The dictionary can be a flat file (8i), a dictionary extracted to the redo logs (9i), or the online dictionary catalog (9i).
  • REDO Log Files – Contain the changes to the database or the redo database dictionary.
Prerequisites for Using LogMiner
There are several prerequisite conditions for the use of LogMiner: archivelog mode must be enabled, utl_file_dir must be set, the LogMiner dictionary must be enabled / specified, and supplemental logging should be considered.
  • The database must be in archivelog mode.  To check if the database is in archivelog mode, issue this statement:
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /app/oracle/admin/BPRD/arch/BPRD
Oldest online log sequence     50856
Next log sequence to archive   50863
Current log sequence           50863
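
Equivalently, if you prefer a query over the SQL*Plus command, you can read the log mode straight from v$database (output shown is illustrative):

SQL> SELECT log_mode FROM v$database;

LOG_MODE
------------
ARCHIVELOG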
  • If you are using a flat file dictionary, the utl_file_dir parameter must be specified and set to a valid directory.  To check, issue this statement:
SQL> show parameter utl_file_dir;

NAME                         TYPE        VALUE
---------------------------- ----------- ------------------------------
utl_file_dir                 string      /app/oracle/admin/BPRD/logs

If the parameter is not set, change it in the init.ora or set it in the spfile; utl_file_dir is a static parameter, so this will require a database bounce.


SQL> alter system set utl_file_dir='/app/oracle/admin/BPRD/logs' scope=spfile;


  • The LogMiner dictionary must be enabled or specified.  As of Oracle 9i, you have three choices for the LogMiner dictionary – online, redo, or flat file.  Since this paper deals specifically with LogMiner basics, using KISS, we will only address creation of a flat file dictionary.  Perform this step if there is no existing dictionary file, objects have changed, or new objects have been added (an existing LogMiner dictionary has gone "stale").
The dictionary file must be created from the same database as the redo logs to be analyzed.
The source must be on the same platform, and have the same block size and character set, as the analyzing instance (if you wish to analyze in a different instance).  To create the dictionary file, connect as SYS and execute the build package:


SQL> EXECUTE dbms_logmnr_d.build('lm_BPRD_logdict.ora', '/app/oracle/admin/BPRD/logs');
     (first argument: dictionary file name; second argument: location, the utl_file_dir)

  • Supplemental Logging Note
    As of Oracle 10g, there is a note in the setup of LogMiner that states, "You must enable supplemental logging prior to generating log files that will be analyzed by LogMiner."  To see if supplemental logging is enabled, issue this statement:


SQL> select supplemental_log_data_min from v$database;
SUPPLEME
--------
NO

To turn on minimal supplemental logging, issue this statement:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
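
If database-wide overhead is the concern, note that from 10g supplemental logging can also be declared per table rather than for the whole database; a minimal sketch on a hypothetical table (scott.emp):

SQL> -- table-level identification key logging: log primary key columns for this one table
SQL> ALTER TABLE scott.emp ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;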

This additional overhead may not be necessary.
From Metalink Note 291686.1: "In release 9.2 and 10g, by default, Oracle Database does not provide any supplemental logging, which means that the following LogMiner features are NOT supported by default:"
- Index clusters, chained rows, and migrated rows
- Direct-path inserts (also require that ARCHIVELOG mode be enabled)
- Extracting the LogMiner dictionary into the redo log files
- DDL tracking
- Generating SQL_REDO and SQL_UNDO with identification key information
- LONG and LOB data types
Not using supplemental logging comes with a sacrifice.  This is the simplest use of LogMiner, so we will skip this logging.
Using LogMiner
There are only a few simple steps to use LogMiner:
  • Specify the REDO logs
  • Start the analysis
  • Query the logs
  • Stop the analysis
The Logs to be Mined
If the REDO logs needed for analysis have been bundled up by RMAN or a third party tool such as BMC's SQL-Backtrack, restore the .ARC logs that have been expired out of the catalog (for SQL-Backtrack) to the analysis directory of your choice.
Specify Log Files for Analysis
You can specify online or archived log files.  You identify the logs by making calls to the dbms_logmnr package.  To create a new list, disregarding files supplied before, supply the NEW argument; each subsequent call with the ADDFILE argument adds one file to the list.
Connect as SYS.
For a single new file, or a new set of files:

SQL> EXECUTE dbms_logmnr.add_logfile('/app/oracle/admin/BPRD/arch/BPRD_3125.ARC', dbms_logmnr.NEW);

To add additional files (this call can be repeated for additional logs):

SQL> EXECUTE dbms_logmnr.add_logfile('/app/oracle/admin/BPRD/arch/BPRD_3126.ARC', dbms_logmnr.ADDFILE);

To review the log files to be analyzed, run this query:

SQL> SELECT db_name, thread_sqn, filename FROM v$logmnr_logs;

Starting the Analysis
Starting LogMiner is a simple call to a stored package.  Connect again as SYS:

SQL> EXECUTE dbms_logmnr.start_logmnr(dictfilename => '/app/oracle/admin/BPRD/logs/lm_BPRD_logdict.ora');
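
If you only want committed work in the view, start_logmnr also accepts an options argument; a minimal sketch using the package's COMMITTED_DATA_ONLY constant (available from 9i on):

SQL> EXECUTE dbms_logmnr.start_logmnr(dictfilename => '/app/oracle/admin/BPRD/logs/lm_BPRD_logdict.ora', options => dbms_logmnr.committed_data_only);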

Querying the Logs
The contents of the log files are now available to analyze in a view, v$logmnr_contents.  Connect as SYS.
Confirm you set up LogMiner correctly by running this query:

SQL> SELECT seg_name, operation FROM v$logmnr_contents WHERE rownum <= 10;

One way to review the results is by running this query:

SQL> set linesize 2000
SQL> column sql_redo format a200
SQL> set trimspool on
SQL> SELECT username, sql_redo, sql_undo FROM v$logmnr_contents ORDER BY timestamp;

See the next section on search strategies for ideas on improving your search.
Cleaning Up
Stopping LogMiner closes it down and frees up system resources.  Connect as SYS:

SQL> EXECUTE dbms_logmnr.end_logmnr;

Search Strategies
Remember that this is the most basic of information on the use of LogMiner.  With supplemental logging turned on and the LogMiner Viewer GUI, the use of this utility has become much friendlier.  However, with a minimum of commands, you can reach into the REDO logs and "magically" bring transactions back from the dead.  Here are some ideas to make that easier:
Prep work can narrow your search
If you have an idea of when a change occurred, query the log history view to narrow the search for the correct REDO log.

select * from v$loghist;

THREAD#  SEQUENCE#  FIRST_CHANGE#  FIRST_TIME                SWITCH_CHANGE#
1        50865      5435667712681  2/25/2008 10:18:10.000 PM  5435667746882
1        50866      5435667746882  2/25/2008 10:21:00.000 PM  5435667783764
1        50867      5435667783764  2/25/2008 10:23:52.000 PM  5435667831236
1        50868      5435667831236  2/25/2008 10:26:47.000 PM  5435667864105
1        50869      5435667864105  2/25/2008 10:27:45.000 PM  5435668524927
1        50870      5435668524927  2/25/2008 11:03:58.000 PM  5435668818385
1        50871      5435668818385  2/25/2008 11:23:17.000 PM  5435669324582
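
If you have an approximate time, you can go straight at the history with a date filter on first_time rather than scanning the whole listing; a minimal sketch (the 2/25/2008 window is illustrative):

SQL> SELECT thread#, sequence#, first_time FROM v$loghist WHERE first_time BETWEEN to_date('02/25/2008 22:00','MM/DD/YYYY HH24:MI') AND to_date('02/25/2008 23:30','MM/DD/YYYY HH24:MI') ORDER BY sequence#;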

How to more successfully search
You can quickly narrow your searches with knowledge of what you are looking for.  If you have the username, know the date/time range the transaction fell in, and know the table that was involved, your query can be constructed to get right at it.
In this example we have all three elements – username, timestamp range, and the table name we are looking for in the REDO:

SELECT to_char(scn,'9999999999999'), to_char(timestamp,'DD-MON-YYYY HH24:MI'), sql_redo
FROM v$logmnr_contents
WHERE username = 'ELABORPROD'
  AND timestamp > to_date('05/03/2007 10:20','MM/DD/YYYY HH24:MI')
  AND timestamp < to_date('05/03/2007 10:30','MM/DD/YYYY HH24:MI')
  AND sql_redo LIKE '%PAYCODEDEF_TAB%'
ORDER BY timestamp;

The trick is to use small timeslices if you have an approximate time of the incident.  Once you locate some of the SQL, use the SCN – or the transaction ID columns, as sketched below – to get all of the SQL in the transaction.  Reading the REDO log takes time; have patience.  Verify your results with the users if applicable.
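
v$logmnr_contents also carries transaction ID columns (XIDUSN, XIDSLT, XIDSQN); once one row of the transaction is located, those values can pull back the whole transaction.  A minimal sketch, with hypothetical XID values:

SQL> -- the three XID* columns together identify one transaction in v$logmnr_contents
SQL> SELECT scn, operation, sql_redo, sql_undo FROM v$logmnr_contents WHERE xidusn = 5 AND xidslt = 22 AND xidsqn = 31572 ORDER BY scn;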
Hints for a Clean Spool File
Set your SQL*Plus parameters to:
  • set linesize 2000 – give it space
  • set heading off – not needed in output
  • set pagesize 0 – not needed
  • set trimspool on – get rid of trailing whitespace
Put together, a clean spool run might look like the sketch below.
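
A minimal end-to-end spool sketch (the spool file name is illustrative):

SQL> set linesize 2000
SQL> set heading off
SQL> set pagesize 0
SQL> set trimspool on
SQL> spool undo_paycodedef.sql
SQL> SELECT sql_undo FROM v$logmnr_contents WHERE sql_redo LIKE '%PAYCODEDEF_TAB%' ORDER BY timestamp;
SQL> spool off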
A Long Example (using supplemental logging and the online dictionary)
Created a test table by copying the PS_VENDOR_LOC table to TIM2.PS_VENDOR_LOC.

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;
Database altered.

SQL> select count(*) from tim2.ps_vendor_loc;
  COUNT(*)
----------
    315988

SQL> DELETE FROM tim2.ps_vendor_loc WHERE vndr_loc LIKE '%0146';
14 rows deleted.

SQL> commit;
Commit complete.

SQL> select count(*) from tim2.ps_vendor_loc;
  COUNT(*)
----------
    315974

Did a log switch.  (Necessary because we have no archive logs; the command is shown below.)
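
The switch itself is the standard command (shown here for completeness):

SQL> ALTER SYSTEM SWITCH LOGFILE;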
Add a new file, or a new set of files, for analysis:
SQL> EXECUTE dbms_logmnr.add_logfile('/db/FTST/log05a.dbf', dbms_logmnr.NEW);
PL/SQL procedure successfully completed.

To review the log files to be analyzed, run this query:
SQL> SELECT db_name, thread_sqn, filename FROM v$logmnr_logs;

DB_NAME   THREAD_SQN  FILENAME
--------  ----------  ----------------------------------
FTST      286         /db/FTST/log05a.dbf

Started the analysis – this loads the contents view (using the 9i option of the online catalog):
SQL> EXECUTE dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
PL/SQL procedure successfully completed.

Confirmed I set up LogMiner correctly by running this query:
SQL> SELECT seg_name, operation FROM v$logmnr_contents WHERE rownum <= 10;

Analysis:
Since I knew the owner, I searched the contents with a WHERE clause.
SQL> set linesize 2000
SQL> set heading off
SQL> set pagesize 1000

SQL> SELECT sql_undo FROM v$logmnr_contents WHERE sql_redo LIKE '%TIM2%' ORDER BY timestamp;
This returned 14 rows.  Bingo!  The correct number of rows was found.

Stopped LogMiner, which frees up system resources:
SQL> EXECUTE dbms_logmnr.end_logmnr;

Appropriate Use
LogMiner is useful for identifying and undoing logical corruption.  It enables you to determine when a logical corruption began, so you can perform granular logical recovery, undoing specific transactions.
  • Transactions can be "undone".  Be aware of the risk that changes subsequent to the transaction may have occurred.  If your database enforces referential integrity within the database (why wouldn't it?), the risk is mitigated, especially in the newer releases of Oracle and LogMiner.  If your database enforces integrity in the application, knowledge of the application and of the relationships the transaction may have with other tables is vital.
  • If the corruption occurred after the world has moved on, LogMiner may give you the option of not needing to roll the database back to a point in time.  Use LogMiner to identify all pieces of the corrupting transaction(s) and "undo" them.
  • Use LogMiner to sleuth.  So many of the applications companies use today are not grown in house.  We may think we know what is going on, but with LogMiner we can dig into the details to understand what is really happening.  This can reveal tuning opportunities and shed light on possible process improvements.  In the case of a "disaster", when others are "sure" of what was involved, LogMiner can show you what did happen.
  • A recovery that requires rolling the database back to a point in time can now be rolled forward with recovered good transactions.
  • Use LogMiner as an audit tool.  Digging into the redo allows DBAs and auditors a window into database activity.  The who, what, and when of a transaction can be identified and documented.
  • To undo a transaction, why not use Flashback Query?  You may not have the value in the undo if the undo_retention parameter is set for too short a period (see the sketch after this list).
  • The possibilities are only limited by your dexterity and imagination in the use of LogMiner.
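
For comparison, a Flashback Query pulls old values from undo rather than redo; a minimal sketch, reusing the test table from the long example with a hypothetical timestamp (this only works while the undo is still within undo_retention):

SQL> -- read the rows as they existed just before the bad transaction
SQL> SELECT * FROM tim2.ps_vendor_loc AS OF TIMESTAMP to_timestamp('05/03/2007 10:15','MM/DD/YYYY HH24:MI') WHERE vndr_loc LIKE '%0146';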
When LogMiner Saved the Day
Several times in the past 8 years, LogMiner has saved the day; here are a couple of examples:
Loss of an Application Module – Oops, We Hit Delete
On Monday afternoon, February 21, 2005, one of our mills accidentally deleted the entire purchasing module of the Maximo application.  The developers and analysts were absolutely certain only 6 tables were involved.  On Tuesday the 22nd the DBAs were asked to bring these tables back from an export taken early on Sunday the 20th.  Two days had passed since this export had been taken.  The application and database were not shut down, nor were users shut out.  After mining the logs for the redo information, the DBAs identified that 7 tables were involved and 7,896 rows had been deleted.  LogMiner was used to construct the undo transactions, and the data was reapplied to the application.  Since the module was out of commission, no data needed to be synchronized, and transactions that had been logged in other portions of the application were not lost to a database recovery to an earlier point in time.  The integrity of the application was maintained by not using the export files, which would have lost Sunday and Monday's purchasing transactions.

I Can't Believe We Updated That Many Rows?!
In August of 2007, some SQL was run to change two banking account codes in our finance database.  This was run in test and verified.  Unfortunately, only expected results were verified.  The code was run in production on a Tuesday at noon.  Wednesday afternoon the accountant and developer were scrambling to understand what was wrong in the banking module.  Our undo retention is set for four hours; too much time had passed to flash back the transaction.  No logs were kept, nor counts retained.  We did have the exact statement that was run.  Using LogMiner, it was discovered that 19,000 rows had been updated.  A flaw was discovered in the SQL, and upon further review, it was clear they had intended to update only about 6,000 rows.  LogMiner was used to undo the transaction, and the correct SQL was applied.  A good deal of time was spent verifying that the columns updated had not changed in the 28 hours since the bad transaction was run.  It was very fortunate that the accountant was very familiar with the data.

LogMiner Feature Improvements
Your best source of information on LogMiner is Oracle's Database Utilities documentation.  Here is a brief listing of features added to LogMiner in the releases of Oracle subsequent to Oracle 8i:
Oracle 9i LogMiner New Features
  • A LogMiner Viewer GUI in addition to the command line interface
  • The ability to translate DML statements associated with clusters
  • Support for DDL statements
  • Extracting the dictionary to the redo logs or an online dictionary
  • The ability to detect a stale dictionary
  • The ability to skip redo log corruption
  • The ability to display only committed transactions
Oracle 10g LogMiner New Features
  • New procedure DBMS_LOGMNR.REMOVE_LOGFILE(), which replaces the REMOVEFILE option (see the sketch after this list)
  • NO_ROWID_IN_STMT option for START_LOGMNR, which filters ROWID out of undo and redo statements
  • Supplemental logging enhanced with options to log FOREIGN KEY and ALL
  • At the table level, identification key logging is supported
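
As an illustration of the first item, dropping a file from the mining list is now a dedicated call; a minimal sketch, reusing the archive log path from earlier (illustrative):

SQL> EXECUTE dbms_logmnr.remove_logfile(logfilename => '/app/oracle/admin/BPRD/arch/BPRD_3126.ARC');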
Oracle 11g LogMiner New Features
  • LogMiner now supports XML data types
  • The LogMiner Viewer GUI has been enhanced
Conclusion
In my use of LogMiner, I have barely scratched the surface of its capabilities.  I have successfully recovered several of our sites using the 8i features of LogMiner.  It truly can "pull your bacon out of the fire" when it seems like you won't be able to bring the data back.  It is not only a useful recovery tool; it can also prove what tables are at play in a "lost" transaction or being used by an application.  The beauty of this utility is that, even in its most basic form, it delivers.
References
Oracle 8i "Enterprise DBA Part 1B: Backup and Recovery Workshop"
Oracle Database Utilities 10g Release 2 (10.2), Part Number B14215-0, Chapter 17
Oracle Database Utilities 11g Release 1 (11.1), Part Number B28319-02, Chapter 18
Oracle Technology Network, Oracle Magazine; Arup Nanda, "Mining for Clues"
Oracle9i New Features; Robert G. Freeman, Oracle Press; ISBN 0-07-222385-5
White Paper: LogMiner Utility Release 8.1.x - 10g, Metalink Note 291686.1
Exploring Oracle, Vol. 7 No. 10, October 2002, pages 6-9
Exploring Oracle, Vol. 7 No. 11, November 2002
About the Author
Tim Herring is a lead DBA with Boise Inc.  He graduated in 1991 from California State University, Chico, with a Bachelor of Science in Business, concentration in MIS / Databases.  He spent seven years as an application developer using PowerBuilder in an Oracle environment.  For two years he acted as a DBA / developer responsible for VB development on SYBASE, SQL Server 6, and Oracle 7.  For the past nine years he has been working as an Oracle DBA on Oracle 7.3.3, 8i, 9i, and 10g.  He has been a member of IOUG for the past six years.  He presented a 30-minute "Quick Tip" at Collaborate 07 on Cleaning Up Replication in a Cloned Database.
Special Thanks
I wish to thank Craig Shallahamer, Gaja Krishna Vaidyanatha, Rich Niemiec, and Kirti Deshpande, the real Oracle gurus, who continue to provide inspiration, knowledge, guidance, and wit to the Oracle DBA community.