Monday, December 13, 2010

COMPILE INVALID OBJECTS


Applying patches can create invalid objects. To get a quick count of existing invalid objects (if any), use the following SELECT statement:
SELECT COUNT(*)
FROM DBA_OBJECTS
WHERE STATUS = 'INVALID';

For a more detailed breakdown, use the following query:
SELECT OWNER, OBJECT_TYPE, COUNT(*)
FROM DBA_OBJECTS
WHERE STATUS = 'INVALID'
GROUP BY OWNER, OBJECT_TYPE;
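A common follow-up (a sketch, not an Oracle-supplied script) is to have the dictionary generate the recompile statements for you; spool the output and run it as the owning schema:

```sql
-- Generate an ALTER ... COMPILE statement for every invalid object.
-- Package bodies need the COMPILE BODY form.
SELECT 'ALTER ' || DECODE(object_type, 'PACKAGE BODY', 'PACKAGE', object_type)
       || ' ' || owner || '.' || object_name
       || DECODE(object_type, 'PACKAGE BODY', ' COMPILE BODY;', ' COMPILE;') stmt
FROM   dba_objects
WHERE  status = 'INVALID'
ORDER  BY owner, object_type;
```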

To recompile an individual object, connect to SQL*Plus as the owner of the object (generally APPS) and use one of the following, depending on the object type:
ALTER PACKAGE <package_name> COMPILE; (package specification)
ALTER PACKAGE <package_name> COMPILE BODY; (package body)
ALTER VIEW <view_name> COMPILE; (view)
If the object compiles with warnings, use either of the following to see the errors that caused the warnings:

show errors

OR
select * from user_errors where name = '<OBJECT_NAME>';
Another way to correct invalid objects is to run the adadmin utility as follows:


UNIX OPERATING PLATFORM

1. Log in as the applications (APPS) user.

2. Start the adadmin utility from the Unix prompt with this command:
adadmin

The utility will then ask you a series of questions.

3. Under the Maintain Applications Database Objects Menu, select Compile APPS schema(s)

This task spawns parallel workers to compile invalid database objects in your APPS schema(s). It uses the same parallel phases as AutoInstall.

Also try running $ORACLE_HOME/rdbms/admin/utlrp.sql (as SYSDBA).

Within Applications, there is a script to compile INVALID objects - called ADCOMPSC.pls

Arguments for ADCOMPSC.pls:


1 - Schema to run in
2 - Password for the schema
3 - Name pattern of the objects to check (e.g., % for all objects)

NOTE: The order in which to compile invalid objects in schemas is SYS, SYSTEM, APPS, and then all others. APPS_DDL and APPS_ARRAY_DDL should exist in all schemas. In case of an ORA-1555 error while running adcompsc.pls, restart the script.

The script can be run as follows:
cd $AD_TOP/sql
sqlplus @adcompsc.pls SCHEMA_NAME SCHEMA_PASSWORD %

Example : SQL> @adcompsc.pls apps apps %

After the script completes, check for invalid objects again. If the number has decreased but invalid objects still exist, run adcompsc.pls again. Keep running adcompsc.pls until the number of invalid objects stops decreasing.

If any objects are still INVALID, verify them by using the script aderrchk.sql to record the remaining invalid objects. aderrchk.sql uses the same syntax as adcompsc.pls and is also supplied with the Applications. Send the aderrchk.sql output to a file using the SPOOL command in SQL*Plus:
e.g. sqlplus x/y @aderrchk.sql SCHEMA_NAME SCHEMA_PASSWORD %

For objects which will not compile, try the following:
select text
from user_source
where name = 'OBJECTNAME'
and text like '%Header%';

The $Header line returned identifies the scripts that create (or recreate) the package. Run them to recreate it:
SQL>@packageheader
SQL>@packagebody

If recreating the package does not make it valid, query USER_ERRORS to determine the cause of the invalid package:
select text
from user_errors
where name = '<OBJECT_NAME>';

Saturday, December 11, 2010

Utility TO CHECK TECHSTACK COMPONENTS-APPS-Database

There is a good utility for checking all the techstack components and their versions present in an E-Business Suite application. It is applicable to both 11i and R12 environments.

The script $FND_TOP/patch/115/bin/txkInventory.pl on the application tier fetches the versions of all components on the apps side. It is run by passing it as input to the $FND_TOP/patch/115/bin/TXKScript.pl script. TXKScript.pl takes two mandatory arguments: the script to run and the directory for storing the log and out files. Besides these, you must also give the context file name and location, the apps password, and an outfile where the techstack component report will be created.

Log in to the apps tier of your application, source the environment, and run the command below:

(appmgr06) appmgr - -bash $ perl $FND_TOP/patch/115/bin/TXKScript.pl -script=$FND_TOP/patch/115/bin/txkInventory.pl -txktop=$APPLTMP -contextfile=$CONTEXT_FILE -appspass=apps -outfile=$OA_HTML/techstack_info.html
*** ALL THE FOLLOWING FILES ARE REQUIRED FOR RESOLVING RUNTIME ERRORS
*** STDOUT = /slot06/appmgr/scmc2mq0comn/rgf/scmc2mq0/TXK/txkInventory_Fri_May_16_04_01_03_2008_stdout.log
Reportfile /slot06/appmgr/scmc2mq0comn/html/techstack_info.html generated successfully.

You can access the above report file using http://(hostname.domain):(port)/OA_HTML/techstack_info.html

The report will contain the following information:

* Summary of environment details
* Inventory of HTTP Server Node
* Inventory of Forms Server Node
* Inventory of Concurrent Processing Server Node
* Inventory of Administration Server Node

Similarly, you can run the same utility on the database tier:

(oracle06) scmc2mq0 - -bash $ perl $ORACLE_HOME/appsutil/bin/TXKScript.pl -script=$ORACLE_HOME/appsutil/bin/txkInventory.pl -txktop=$ORACLE_HOME/appsutil/temp -contextfile=$CONTEXT_FILE -appspass=apps -outfile=$ORACLE_HOME/appsutil/temp/master_db.html

Upgrade Oracle Applications from 11.5.10.2 to R12

System Overview

Version 11.5.10.2
Oracle Applications Vision Instance
OS: Redhat Linux update 5
Type: Single Node instance
Applications OS User: applmgrupd
Installed directory: /d01/oracle/singlenode
Database 9i OS User : oracleupd
9i Database Home = /d01/oracle/singlenode/testdb/9.2.0
10g Database Home = /d01/oracle/singlenode/testdb/10.2.0
Instance SID: TEST
Host: linux1

Task – To Upgrade Oracle Applications 11.5.10.2 to R12
Note: Use a Vision instance, as it is already Multi-Org; Multi-Org is mandatory in R12.

Note: The upgrade below is for testing purposes only and is not to be performed on production servers; please refer to My Oracle Support (Metalink) for more information.




Upgrade path
• Applications 11.5.6 and below must first be upgraded to 11.5.10.2/10gR2
• Applications 11.5.7 and up can be upgraded directly to R12
• In all releases the database must already be on 10gR2 (11.5.9.2 / 11.5.10.2), or must be upgraded during the R12 upgrade (11.5.9.1 / 11.5.10.1 and below)



Here are the steps to upgrade Oracle Applications 11.5.10.2 to R12; let's split the activity into 6 steps.
1> Upgrade the database from 9.2.0.6 to 10.2.0.3 (the steps are available in the link below)

2> Apply the below patches in your existing 11.5.10.2 environment

Shut down the entire E-Business Suite environment and perform a full backup
4712852 - Minipack 11i.AD.I.4
5753359 - ALPPNR, NO RESPONSE ACTION PROCESSOR CONC PGM ENDS IN ERROR
5467526 - OATM MIGRATION UTILITY: ORA-22853 IN ERROR LOG
5120936 - TUMS for R12: TO DELIVER TUMS UTILITY FOR UPGRADES FROM 11I TO R12
5726010 - AD Preparation Scripts for R12

3> Run rapidwiz to create R12 code tree

4> Apply the below patches in R12 environment
4502962 – R12 Minipack

5> Configuration Phase – R12

6> Apply the online help

2> Apply the below patches in your existing 11.5.10.2 environment
4712852 - Minipack 11i.AD.I.4
5753359 - ALPPNR, NO RESPONSE ACTION PROCESSOR CONC PGM ENDS IN ERROR
*5467526 - OATM MIGRATION UTILITY: ORA-22853 IN ERROR LOG
5120936 - TUMS for R12: TO DELIVER TUMS UTILITY FOR UPGRADES FROM 11I TO R12
5726010 - AD Preparation Scripts for R12

Enable the maintenance mode

adadmin defaultsfile=$APPL_TOP/admin/$TWO_TASK/adalldefaults.txt logfile=adadmin.log menu_option=ENABLE_MAINT_MODE workers=4

-- Apply Patch 4712852
unzip p4712852_11i_LINUX.zip
cd 4712852

-- Takes 5 minutes
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/adalldefaults.txt logfile=4712852.log patchtop=/software/upgrades/11.5.10.2toR12/r11/4712852 driver=u4712852.drv workers=4
-- Apply patch u5753359.drv
unzip p5753359_11i_LINUX.zip

-- Takes 2 minutes
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/adalldefaults.txt logfile=5753359.log patchtop=/software/upgrades/11.5.10.2toR12/r11/5753359 driver=u5753359.drv workers=4
-- Apply patch u5467526.drv

unzip p5467526_11i_LINUX.zip
cd 5467526
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/adalldefaults.txt logfile=5467526.log patchtop=/software/upgrades/11.5.10.2toR12/r11/5467526 driver=u5467526.drv workers=4

-- Apply patch u5120936.drv
unzip p5120936_11i_GENERIC.zip
cd 5120936
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/adalldefaults.txt logfile=5120936.log patchtop=/software/upgrades/11.5.10.2toR12/r12/5120936 driver=u5120936.drv workers=4
cd $AD_TOP/patch/115/sql

sqlplus apps/apps @adtums.sql /usr/tmp

-- Review TUMS report in /usr/tmp
-- Apply patch 5726010
unzip p5726010_11i_GENERIC.zip

cd 5726010/

sqlplus apps/apps @adgncons.sql apps apps
# modify the adcrtbsp.sql to specify the path for new tablespaces

sqlplus system/manager @adcrtbsp.sql

su - oracleupd

mkdir -p $ORACLE_HOME/appsutil/admin
cd $ORACLE_HOME/appsutil/admin
cp /software/upgrades/11.5.10.2toR12/r12/5726010/adgrants.sql .

sqlplus '/ as sysdba' @adgrants.sql applsys
-- Run rapidwiz to create R12 code tree
The installation will install the R12 application components and the database home.
Select upgrade to Oracle Applications R12.
On the upgrade actions screen, select create upgrade file system.
Enter the parameters required to set up your new environment, then run Rapid Install.













The Installation creates APPL_TOP, COMMON_TOP, INST_TOP and 10g Oracle Home
Move the new environment file into .bash_profile
su - applmgrupd
cd /d01/oracle/apps/apps_st/appl
mv .bash_profile .bash_profile_11.5.10
cat APPSVIS_linux1.env >> /home/applmgrupd/.bash_profile
From now on, you are connected to the R12 application.
-- Apply patch AD.A 4502962
-- takes 6 minutes
unzip p4502962_R12_LINUX.zip
cd 4502962/
adpatch

Please enter the name of your AutoPatch driver file: u4502962.drv
-- Run the American English upgrade patch driver
-- Takes 40 hours
cd $AU_TOP/patch/115/driver

adpatch options=nocopyportion,nogenerateportion

Please enter the name of your AutoPatch driver file: u4440000.drv
Run the NLS upgrade patch driver (conditional)
• Download the NLS Release 12 patch (4440000) for each active language in the system.
• Run each driver (u4440000.drv) with adpatch
• NLS patch driver has the same name as the American English patch driver.
Synchronize NLS and American English product patches (conditional)
• Generate manifest using perl $AD_TOP/bin/adgennls.pl
• Back up the manifest file $APPL_TOP/admin/$TWO_TASK/out/adgennls.txt
• http://updates.oracle.com/TransSync
• Don't check Translation Level Updates
• Upload manifest
• Download and apply patch
-- Configuration Phase
Disable maintenance mode
Run autoconfig

Run rapidwiz to configure and start processes
Sign on to R12 applications and check the concurrent managers

Run autoconfig
su - applmgrupd
perl $AD_TOP/bin/admkappsutil.pl
su - oracleupd
cd $ORACLE_HOME
cp /d01/oracle/inst/apps/TEST_linux1/admin/out/appsutil.zip .
unzip -o appsutil.zip
cd $ORACLE_HOME/appsutil/scripts/TEST_linux1
sh adautocfg.sh

Run rapidwiz to configure and start processes
-----------------------------------------
rapidwiz - Upgrade to Oracle Applications Release 12 -> Upgrade Action -> Select Configure Upgraded Release 12 instance
$INST_TOP/admin/VIS_linux1.xml
/d01/oracle/inst/apps/TEST_linux1/appl/admin/TEST_linux1.xml








Note: I got the error below, but all services and the application are running.



Sign on to R12 applications and check the concurrent managers



Connect as sysadmin/sysadmin




-- Apply online help
cd $AU_TOP/patch/115/driver/
adpatch options=hotpatch,nocopyportion,nogenerateportion
Please enter the name of your AutoPatch driver file : u5051400.drv
-- Gather schema statistics for CBO
Release 12 employs cost-based optimization, which examines FND table statistics to determine the most efficient access paths and join methods for executing SQL statements. These statistics are gathered by the FND_STATS process, which you initiate by running the Gather Schema Statistics concurrent program.
From your Release 11i APPL_TOP, complete the following steps:
1. Log in to Oracle Applications with the System Administrator responsibility.
2. Navigate to the Submit Request window (Request > Run).
3. Submit the Gather Statistics program. (Schema)
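The same gather can also be kicked off from SQL*Plus instead of the concurrent program (a sketch; verify the FND_STATS signature for your release before relying on it):

```sql
-- Connect as APPS and gather CBO statistics for all Applications schemas:
EXEC FND_STATS.GATHER_SCHEMA_STATS('ALL');
```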

References
1. Upgrading Oracle Applications Note# 289788.1
2. Oracle 10gR2 Database Preparation Guidelines for an E-Business Suite R12 Note#403339.1
3. Oracle applications R11.5.10.2 Maintenance pack Installation Note#316365.1
4. Oracle Applications Upgrade Guide: Release 11i to Release 12 Part# B31566-01
5. Database Initialization Parameters for Oracle Applications Release 12 Note#396009.1
6. Oracle Applications Release Notes Release 12 Note# 405293.1
7. E-Business Suite Release 12 Upgrade Sizing and Best Practices Note# 399362.1
8. Upgrading to R12 Note# 414710.1
9. Maintenance Wizard Note# 215527.1

Friday, December 10, 2010

All Control file lost Recover database from RMAN Backup (ORA-00205: error in identifying control file)

During a media failure (single disk crash) the online redo log is usually available, so it is possible to recover up to the point of failure. Quite often a Windows Oracle instance can still be started in NOMOUNT mode (i.e., the spfile is also available).

In this scenario all control files are lost and they were not multiplexed. In that case you have no option but to use a backup control file. The recovery needs to be performed from within RMAN, and requires that all logs (archived and current online logs) since the last backup are available. The logs are required because all datafiles must also be restored from backup. The database will then have to be recovered up to the time the control files went missing. This can only be done if all intervening logs are available.


In this example RMAN is not using a recovery catalog.

Trying to start the database produced the following error:

d:> sqlplus sys/**** as sysdba;

SQL*Plus: Release 10.2.0.3.0 - Production on Tue Sep 29 11:05:12 2009

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

Connected to an idle instance.

ORCL: SQL> startup;
ORACLE instance started.

Total System Global Area 524288000 bytes
Fixed Size 1291576 bytes
Variable Size 306186952 bytes
Database Buffers 213909504 bytes
Redo Buffers 2899968 bytes
ORA-00205: error in identifying control file, check alert log for more info

In the alert log file I found the following error:

Tue Sep 29 11:05:18 2009
ALTER DATABASE MOUNT
Tue Sep 29 11:05:18 2009
ORA-00202: control file: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\CONTROL01.CTL'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.


Complete Recovery by last night RMAN BACKUP set

1. d:\> rman target /

Recovery Manager: Release 10.2.0.3.0 - Production on Tue Sep 29 10:19:36 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: ORCL (not mounted)



2. RMAN> set dbid=1096578125
executing command: SET DBID

**** Set the DBID - get this from the name of the controlfile autobackup.
For example, if the autobackup name is CTL_SP_BAK_C-1507972899-20050124-00, then the DBID is 1507972899. (This step is not required if the instance is mounted, since the DBID is then read from the control file.)
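The DBID can be read mechanically from the autobackup file name; here is a small sketch (the helper name is mine, not an RMAN tool):

```python
import re

def extract_dbid(autobackup_name: str) -> int:
    """Pull the DBID out of a controlfile autobackup file name.

    Autobackup names embed the DBID after the 'c-' marker, e.g.
    '...c-1507972899-20050124-00' -> 1507972899.
    """
    m = re.search(r"c-(\d+)-\d{8}-\d{2}$", autobackup_name, re.IGNORECASE)
    if not m:
        raise ValueError(f"no DBID found in {autobackup_name!r}")
    return int(m.group(1))

print(extract_dbid("CTL_SP_BAK_C-1507972899-20050124-00"))  # 1507972899
print(extract_dbid("cf_ORCL_c-1096578125-20090928-03"))     # 1096578125
```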



3. RMAN> set controlfile autobackup format for device type disk to 'D:\ABC\BACKUP\RMAN\ORCL\cf_%d_%F';

executing command: SET CONTROLFILE AUTOBACKUP FORMAT
using target database control file instead of recovery catalog

**** Verify the format of your backup set



4. RMAN> restore controlfile from autobackup;
Starting restore at 29-SEP-09
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK

recovery area destination: D:\oracle\product\10.2.0\flash_recovery_area
database name (or database unique name) used for search: ORCL
channel ORA_DISK_1: no autobackups found in the recovery area
channel ORA_DISK_1: looking for autobackup on day: 20090929
channel ORA_DISK_1: looking for autobackup on day: 20090928
channel ORA_DISK_1: autobackup found: D:\ABC\BACKUP\RMAN\ORCL\cf_ORCL_c-10965
78125-20090928-03
channel ORA_DISK_1: control file restore from autobackup complete
output filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\CONTROL01.CTL
output filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\CONTROL02.CTL
output filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\CONTROL03.CTL
Finished restore at 29-SEP-09

*** Now that the control files have been restored, the instance can mount the database.


5. RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1


6. RMAN> restore database;
Starting restore at 29-SEP-09
Starting implicit crosscheck backup at 29-SEP-09
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK
Crosschecked 7 objects
Finished implicit crosscheck backup at 29-SEP-09

Starting implicit crosscheck copy at 29-SEP-09
using channel ORA_DISK_1
Finished implicit crosscheck copy at 29-SEP-09

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2009_09_28\O1_MF_1_48_5D1QDPFJ_.ARC
File Name: D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2009_09_28\O1_MF_1_49_5D21CNOB_.ARC
File Name: D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2009_09_28\O1_MF_1_50_5D21CPS7_.ARC
File Name: D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2009_09_29\O1_MF_1_51_5D38N85L_.ARC
File Name: D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2009_09_29\O1_MF_1_52_5D45J1YK_.ARC

using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSTEM01.DBF
restoring datafile 00002 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\UNDOTBS01.DBF
restoring datafile 00003 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSAUX01.DBF
restoring datafile 00004 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\USERS01.DBF
restoring datafile 00005 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS_01.DBF
restoring datafile 00006 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS_02.DBF
restoring datafile 00007 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS1_01.DBF
restoring datafile 00008 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS1_02.DBF
restoring datafile 00009 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS2_01.DBF
restoring datafile 00010 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS2_02.DBF
restoring datafile 00011 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS3_01.DBF
restoring datafile 00012 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS3_02.DBF
restoring datafile 00013 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS4_01.DBF
restoring datafile 00014 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS4_02.DBF
restoring datafile 00015 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS5_01.DBF
restoring datafile 00016 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS5_02.DBF
restoring datafile 00017 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS6_01.DBF
restoring datafile 00018 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS6_02.DBF
restoring datafile 00019 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS_S_01.DBF
restoring datafile 00020 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS_S_02.DBF
restoring datafile 00021 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS_SI_01.DBF
restoring datafile 00022 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS_SI_02.DBF
restoring datafile 00023 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS_HL7_01.DBF

restoring datafile 00024 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS_HL7_02.DBF

restoring datafile 00025 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS_HL7I_01.DBF
restoring datafile 00026 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS_HL7I_02.DBF
restoring datafile 00027 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS_AUD_01.DBF
restoring datafile 00028 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS_AUD_02.DBF
restoring datafile 00029 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS_AUDI_01.DBF
restoring datafile 00030 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS_AUDI_02.DBF
restoring datafile 00031 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS_RHAPSODY_01.DBF
restoring datafile 00032 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS_RHAPSODY_02.DBF
restoring datafile 00033 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS_OPIS_RPT_01.DBF
restoring datafile 00034 to D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\OPIS_OPIS_RPT_02.DBF
channel ORA_DISK_1: reading from backup piece D:\ABC\BACKUP\RMAN\ORCL\DB_ORCL_T698759044_S731_P1
channel ORA_DISK_1: restored backup piece 1
piece handle=D:\ABC\BACKUP\RMAN\ORCL\DB_ORCL_T698759044_S731_P1 tag=FULL BACKUP
channel ORA_DISK_1: restore complete, elapsed time: 00:01:39
Finished restore at 29-SEP-09

** The database must be recovered because all datafiles have been restored from backup.




7. RMAN> recover database;

Starting recover at 29-SEP-09
using channel ORA_DISK_1

starting media recovery


archive log thread 1 sequence 48 to 52 is already on disk as file D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2009_09_29\O1_MF_1_52_5D45J1YK_.ARC
archive log thread 1 sequence 53 is already on disk as file D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG
channel ORA_DISK_1: starting archive log restore to default destination
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=46
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=47
channel ORA_DISK_1: reading from backup piece D:\ABC\BACKUP\RMAN\ORCL\ARCH_POM_1_T698759096_S734_P1
channel ORA_DISK_1: restored backup piece 1
piece handle=D:\ABC\BACKUP\RMAN\ORCL\ARCH_ORCL_1_T698759096_S734_P1 tag=TAG20090928T114456
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
archive log filename=D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2009_09_29\O1_MF_1_46_5D475JO8_.ARC thread=1 sequence=46
channel default: deleting archive log(s)
archive log filename=D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2009_09_29\O1_MF_1_46_5D475JO8_.ARC recid=641 stamp=698841715
archive log filename=D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2009_09_29\O1_MF_1_47_5D475K17_.ARC thread=1 sequence=47
channel default: deleting archive log(s)
archive log filename=D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2009_09_29\O1_MF_1_47_5D475K17_.ARC recid=640 stamp=698841713
archive log filename=D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2009_09_29\O1_MF_1_51_5D38N85L_.ARC thread=1 sequence=48 to 51
archive log filename=D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2009_09_29\O1_MF_1_52_5D45J1YK_.ARC thread=1 sequence=52
archive log filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG thread=1 sequence=53
media recovery complete, elapsed time: 00:00:18
Finished recover at 29-SEP-09



8. RMAN> alter database open;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 09/29/2009 10:42:28
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open


-- Recovery completed. The database must be opened with RESETLOGS
-- because a backup control file was used. Can also use
-- "alter database open resetlogs" instead.




9. RMAN> alter database open resetlogs;
database opened




**** Several points are worth emphasizing ****
1. Recovery using a backup controlfile should be done only if a current control file is unavailable.
2. All datafiles must be restored from backup. This means the database will need to be recovered using archived and online redo logs. These MUST be available for recovery until the time of failure.
3. As with any database recovery involving RESETLOGS, take a fresh backup immediately.
4. Technically the above is an example of complete recovery, since all committed transactions were recovered. However, some references consider this to be incomplete recovery because the database log sequence had to be reset.

After recovery using a backup controlfile, all temporary files associated with locally managed tablespaces are no longer available. You can check that this is so by querying the view V$TEMPFILE - no rows will be returned. Therefore tempfiles must be added (or recreated) before the database is made available for general use. In the case at hand, the tempfile already exists, so we merely add it to the temporary tablespace. This can be done using SQL*Plus or any tool of your choice:

SQL> alter tablespace temp add tempfile
'D:\oracle_data\datafiles\ORCL\TEMP01.DBF';
Tablespace altered.

Reclaim disk space from database/ Reduce data file size to free disk

My tablespace is fragmented and I want to reduce the size of my .dbf files to free up disk space. How do I reclaim disk space in Oracle?
Oracle maps the physical files (e.g., customer.dbf) to their logical "tablespace" construct and places objects (tables and indexes) into the tablespace. Within the tablespace, objects are scattered throughout the tablespace and its corresponding datafiles.

Tables, indexes and tablespaces naturally fragment as a function of update activity. Oracle has many methods for reclaiming disk space, and a Segment Advisor that recommends when tables and indexes would benefit from a reorganization to free up disk space.

Segments that undergo significant data manipulation language (DML) activity, such as UPDATE and DELETE operations, can become sparsely populated, with chunks of free space within their data blocks. Besides simply wasting space, sparsely populated segments can also lead to poor performance, because operations such as a full table scan will need to scan more blocks than necessary to retrieve data.


Before Oracle Database 10g, you could reclaim the free segment space by dropping the table, re-creating it, and then reloading the data. You could also use the ALTER TABLE MOVE command to move the table to a different tablespace. Both of these processes, however, must occur with the table offline. Online table reorganization is another way to reclaim space, but it requires significant disk space.
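In 10g and later, such segments can also be compacted online with segment shrink (a sketch; scott.emp is an example table, and its tablespace must use Automatic Segment Space Management):

```sql
-- Row movement must be enabled before a shrink;
-- SHRINK SPACE compacts the segment and lowers its high-water mark.
ALTER TABLE scott.emp ENABLE ROW MOVEMENT;
ALTER TABLE scott.emp SHRINK SPACE CASCADE;
```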

So how much disk space can you reclaim from your data files? Let's calculate.

Let's run this script:

SELECT substr(FILE_NAME,1,55),
       CEIL( (NVL(HWM,1)*&&BLKSIZE)/1024/1024 ) SMALLEST,
       CEIL( BLOCKS*&&BLKSIZE/1024/1024 ) CURRSIZE,
       CEIL( BLOCKS*&&BLKSIZE/1024/1024 ) - CEIL( (NVL(HWM,1)*&&BLKSIZE)/1024/1024 ) SAVINGS
FROM   DBA_DATA_FILES DBADF,
       ( SELECT FILE_ID, MAX(BLOCK_ID+BLOCKS-1) HWM
         FROM DBA_EXTENTS GROUP BY FILE_ID ) DBAFS
WHERE  DBADF.FILE_ID = DBAFS.FILE_ID(+)
ORDER  BY SAVINGS;

You will be asked for BLKSIZE (the database block size).

You can find the database block size in the init<SID>.ora file:
db_block_size=8192

So in this case enter 8192 as the parameter.

The output will be like this:

ORCL: SQL> SELECT substr(FILE_NAME,1,55),
2 CEIL( (NVL(HWM,1)*&&BLKSIZE)/1024/1024 ) SMALLEST,
3 CEIL( BLOCKS*&&BLKSIZE/1024/1024) CURRSIZE,
4 CEIL( BLOCKS*&&BLKSIZE/1024/1024) - CEIL( (NVL(HWM,1)*&&BLKSIZE)/1024/1024 ) SAVINGS
5 FROM DBA_DATA_FILES DBADF, ( SELECT FILE_ID, MAX(BLOCK_ID+BLOCKS-1) HWM FROM DBA_EXTENTS GROUP BY FILE_ID ) DBAFS
6 WHERE DBADF.FILE_ID = DBAFS.FILE_ID(+) Order by SAVINGS
7 /
old 2: CEIL( (NVL(HWM,1)*&&BLKSIZE)/1024/1024 ) SMALLEST,
new 2: CEIL( (NVL(HWM,1)*8192)/1024/1024 ) SMALLEST,
old 3: CEIL( BLOCKS*&&BLKSIZE/1024/1024) CURRSIZE,
new 3: CEIL( BLOCKS*8192/1024/1024) CURRSIZE,
old 4: CEIL( BLOCKS*&&BLKSIZE/1024/1024) - CEIL( (NVL(HWM,1)*&&BLKSIZE)/1024/1024 ) SAVINGS
new 4: CEIL( BLOCKS*8192/1024/1024) - CEIL( (NVL(HWM,1)*8192)/1024/1024 ) SAVINGS

SUBSTR(FILE_NAME,1,55) SMALLEST CURRSIZE SAVINGS
------------------------------------------------------- ---------- ---------- ----------
D:\ ORADATA\ORCL\DATA_AUD_02.DBF 80 80 0
D:\ ORADATA\ORCL\DATA6_02.DBF 200 200 0
D:\ ORADATA\ORCL\DATA1_02.DBF 520 520 0
D:\ ORADATA\ORCL\DATA_S_03.DBF 20 20 0
D:\ ORADATA\ORCL\DATA_AUD_01.DBF 60 60 0
D:\ ORADATA\ORCL\DATA_02.DBF 120 120 0
D:\ ORADATA\ORCL\DATA4_01.DBF 20 20 0
D:\ ORADATA\ORCL\DATA5_02.DBF 100 100 0
D:\ ORADATA\ORCL\DATA_AUDI_01.DBF 17 20 3
D:\ ORADATA\ORCL\SYSAUX01.DBF 197 200 3
D:\ ORADATA\ORCL\DATA_AUDI_02.DBF 16 20 4

SUBSTR(FILE_NAME,1,55) SMALLEST CURRSIZE SAVINGS
------------------------------------------------------- ---------- ---------- ----------
D:\ ORADATA\ORCL\DATA_01.DBF 135 140 5
D:\ ORADATA\ORCL\DATA_SI_01.DBF 13 20 7
D:\ ORADATA\ORCL\DATA6_01.DBF 213 220 7
D:\ ORADATA\ORCL\DATA2_02.DBF 9 20 11
D:\ ORADATA\ORCL\DATA2_01.DBF 8 20 12
D:\ ORADATA\ORCL\DATA4_02.DBF 27 40 13
D:\ ORADATA\ORCL\DATA_S_02.DBF 26 40 14
D:\ ORADATA\ORCL\DATA1_01.DBF 545 560 15
D:\ ORADATA\ORCL\USERS01.DBF 2 19 17
D:\ ORADATA\ORCL\DATA5_01.DBF 102 120 18
D:\ ORADATA\ORCL\DATA_HL7_02.DBF 1 20 19

SUBSTR(FILE_NAME,1,55) SMALLEST CURRSIZE SAVINGS
------------------------------------------------------- ---------- ---------- ----------
D:\ ORADATA\ORCL\DATA_HL7I_02.DBF 1 20 19
D:\ ORADATA\ORCL\DATA3_01.DBF 1 20 19
D:\ ORADATA\ORCL\DATA_HL7_01.DBF 1 20 19
D:\ ORADATA\ORCL\DATA_HL7I_01.DBF 1 20 19
D:\ ORADATA\ORCL\DATA3_02.DBF 1 20 19
D:\ ORADATA\ORCL\DATA_S_01.DBF 20 40 20
D:\ ORADATA\ORCL\DATA_SI_02.DBF 16 40 24

SUBSTR(FILE_NAME,1,55) SMALLEST CURRSIZE SAVINGS
------------------------------------------------------- ---------- ---------- ----------
D:\ ORADATA\ORCL\SYSTEM01.DBF 265 300 35
D:\ ORADATA\ORCL\UNDOTBS01.DBF 34 85 51
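The per-file arithmetic behind SMALLEST, CURRSIZE and SAVINGS can be sketched in Python (the function name is mine, and the block counts below are illustrative, matching the SYSTEM01.DBF row above):

```python
import math

def file_savings_mb(total_blocks: int, hwm_blocks: int, blk_size: int = 8192):
    """Mirror the SQL: sizes are CEILed to whole megabytes.

    total_blocks -- BLOCKS from DBA_DATA_FILES
    hwm_blocks   -- MAX(BLOCK_ID+BLOCKS-1) from DBA_EXTENTS (high-water mark)
    """
    mb = 1024 * 1024
    smallest = math.ceil(max(hwm_blocks, 1) * blk_size / mb)  # resize floor
    currsize = math.ceil(total_blocks * blk_size / mb)        # current size
    return smallest, currsize, currsize - smallest            # savings in MB

# SYSTEM01.DBF: 38400 blocks allocated, high-water mark at block 33920
print(file_savings_mb(38400, 33920))  # (265, 300, 35)
```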

Let's write a dynamic script to resize the data files


set pagesize 0
set verify off
set heading off
set feedback off
set linesize 300

prompt Running Shrink_data_&&1..dbsql
spool Shrink_data_&&1..dbsql

SELECT 'ALTER DATABASE DATAFILE ' || ' ''' || substr(FILE_NAME,1,65) || ' ''' || ' RESIZE '
       || CEIL( (NVL(HWM,1)*&&BLKSIZE)/1024/1024 ) || 'M; '
FROM   DBA_DATA_FILES DBADF,
       ( SELECT FILE_ID, MAX(BLOCK_ID+BLOCKS-1) HWM
         FROM DBA_EXTENTS GROUP BY FILE_ID ) DBAFS
WHERE  DBADF.FILE_ID = DBAFS.FILE_ID(+)
  AND  CEIL( BLOCKS*&&BLKSIZE/1024/1024 ) - CEIL( (NVL(HWM,1)*&&BLKSIZE)/1024/1024 ) > 1;

spool off
set echo on
spool Shrink_data_&&1..log

rem @Shrink_data_&&1..dbsql
spool

spool off

exit

You will see all entries of data files in spool output (Shrink_data.log).

For example:
ALTER DATABASE DATAFILE 'D:\ORADATA\ORCL\SYSTEM01.DBF' RESIZE 265M;

This is a dynamic script and has been tested on several databases.

Thursday, December 9, 2010

Calculate the total size of an Oracle database

The following script calculates the storage allocated to an Oracle database, i.e., its overall size. You will need DBA privilege to run this script.

SQL> define _editor=ed
SQL> select ROUND( ( a.data_size + b.temp_size +
                     c.redo_size + d.cf_size +
                     e.bct_size )
                   /1024/1024/1024
            ) "total_GB_size"
     from
     ( select SUM(bytes) data_size from v$datafile ) a,
     ( select NVL(SUM(bytes),0) temp_size from v$tempfile ) b,
     ( select SUM(bytes) redo_size from v$log ) c,
     ( select SUM(block_size*file_size_blks) cf_size from v$controlfile ) d,
     ( select NVL(bytes,0) bct_size from v$block_change_tracking ) e;


SQL> select a.data_size + b.temp_size + c.redo_size "total_size"
     from ( select sum(bytes)/1024/1024 data_size
            from dba_data_files ) a,
          ( select nvl(sum(bytes),0)/1024/1024 temp_size
            from dba_temp_files ) b,
          ( select sum(bytes)/1024/1024 redo_size
            from sys.v_$log ) c;
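Both queries just sum the component sizes; the arithmetic of the first one can be sketched in Python (the byte counts below are made-up illustrations):

```python
def total_db_gb(datafile_bytes, tempfile_bytes, redo_bytes,
                controlfile_bytes=0, bct_bytes=0):
    """Overall database size in GB, rounded, mirroring the SQL above."""
    total = (datafile_bytes + tempfile_bytes + redo_bytes
             + controlfile_bytes + bct_bytes)
    return round(total / 1024 / 1024 / 1024)

# Illustrative figures: 50 GB of datafiles, 2 GB temp, 150 MB redo.
gb = 1024 ** 3
print(total_db_gb(50 * gb, 2 * gb, 150 * 1024 ** 2))  # 52
```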

The following commands show the overall hard disk size, CPU information, and memory of a Linux server. You will need root privilege to run them.

#fdisk -l | grep Disk

# cat /proc/cpuinfo

# free -m

Tuesday, December 7, 2010

Configuring Data Guard

The purpose of this blog entry is to show you how to manually create a physical standby database.

This will be a basic primary and standby database setup on 2 different machines. In this case, I used VMware.

If we can afford to lose a few minutes of data, this is a very good, cheap disaster recovery solution.


The Standby Database

• Oracle-supplied disaster recovery (DR) solution.
• Operates in 2 modes - manual and automatic.
• No GUI.
• Available since 7.3.4.
• Available with Standard Edition One.





Step by step document to create manual Standby Database

1 - Prerequisites

• The same Oracle software version must be used for both primary and standby databases. The operating system running on the primary and standby locations must be the same, but the operating system releases need not be.
• The Primary Database must run in ARCHIVELOG mode.
• Each primary and standby database must have its own control file.
• If the primary and standby databases are placed on the same system, initialization parameters must be adjusted correctly.

Preparing Primary Database for Standby Database creation
Configuration in Primary Database


2 - Ensure the primary database is in ARCHIVELOG mode

Ensure the primary database is in ARCHIVELOG mode using the following command.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination C:\oracle\ora92\database\archive\ORCLC
Oldest online log sequence 9
Next log sequence to archive 11
Current log sequence 11
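
If archive log list shows NOARCHIVELOG mode instead, the database can be switched over before proceeding. A minimal sketch, assuming the archive destination is already set in the pfile:

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;

In 9i, also ensure log_archive_start=true so that automatic archival is enabled.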

3 - Identify the primary database Datafiles
SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------

C:\ORACLE\ORADATA\ORCLC\SYSTEM01.DBF
C:\ORACLE\ORADATA\ORCLC\UNDOTBS01.DBF
C:\ORACLE\ORADATA\ORCLC\CWMLITE01.DBF
C:\ORACLE\ORADATA\ORCLC\DRSYS01.DBF
C:\ORACLE\ORADATA\ORCLC\EXAMPLE01.DBF
C:\ORACLE\ORADATA\ORCLC\INDX01.DBF
C:\ORACLE\ORADATA\ORCLC\ODM01.DBF
C:\ORACLE\ORADATA\ORCLC\TOOLS01.DBF
C:\ORACLE\ORADATA\ORCLC\USERS01.DBF
C:\ORACLE\ORADATA\ORCLC\XDB01.DBF

10 rows selected.
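
Step 4.2 below also copies the online redo log files, so identify those as well:

SQL> select member from v$logfile;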

4 - Make a cold backup copy of Primary Database
Make a cold backup copy of primary database by performing following steps:

4.1 Shutdown the Primary Database
Issue the following statement to shutdown the primary database.
SQL> shutdown immediate;

4.2 Copy the Datafiles to standby location
Copy the redo log files and Datafiles identified in section 3 to standby location.
Note: Primary Database must be shutdown while copying the files.



5 - Restart the Primary Database

Execute following command to restart the Primary Database.
SQL> startup;


6 - Create Control file for Standby Database
Issue the following command on primary database to create control file for the standby database.
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'D:\oracle\oradata\stby\control_sb01.ctl';

Database altered.
The filename of the newly created standby control file must be different from that of the current control files of the primary database. Also, the control file for the standby database must be created after the cold backup of the datafiles and redo logs. Copy this standby control file to the standby server.




Configuration in Standby Database
7 - Create pfile for standby database from the primary database

Copy the pfile from the primary database to the standby server

8 - Set initialization parameters on physical standby database
Although most of the initialization parameter settings in the text initialization parameter file that you copied from the primary system are also appropriate for the physical standby database, some modifications need to be made. Edit created pfile from primary database.

db_name - Not modified. The same name as the primary database.

compatible - Not modified. The same as the primary database,
control_files - Specify the path name and filename for the standby control file.

log_archive_start - Not modified. The same as the setting for the primary database (TRUE).

db_file_name_convert - Specify the location of the primary database datafiles followed by the standby location of the datafiles. This parameter will convert the filename of the primary database datafiles to the filename of the standby datafile filenames. If the standby database is on the same system as the primary database or if the directory structure where the datafiles are located on the standby site is different from the primary site then this parameter is required.

log_file_name_convert - Specify the location of the primary database logs followed by the standby location of the logs. This parameter will convert the filename of the primary database log to the filenames of the standby log. If the standby database is on the same system as the primary database or if the directory structure where the logs are located on the standby site is different from the primary site then this parameter is required.

instance_name - If both primary and standby databases are to reside on the same server, specify a different value for the standby database than for the primary database. If the standby database will be on a different server, leave this unchanged.
lock_name_space - If both primary and standby databases are to reside on the same server, specify the standby database instance name. Use this parameter when you create the physical standby database on the same system as the primary database: change the INSTANCE_NAME parameter to a value other than its primary database value, and set the LOCK_NAME_SPACE initialization parameter to the same value that you specified for the standby database INSTANCE_NAME.

Also change the values of the parameters background_dump_dest, core_dump_dest and user_dump_dest to specify location of the standby database.

(Refer Annexure for initialization parameter settings for primary and standby database.)
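
Putting these settings together, the edited standby pfile might look like the sketch below. All paths and the instance name are illustrative assumptions, not values from the annexure:

db_name=orclc
compatible=9.2.0
control_files='D:\oracle\oradata\stby\control_sb01.ctl'
log_archive_start=true
db_file_name_convert=('C:\ORACLE\ORADATA\ORCLC','D:\oracle\oradata\stby')
log_file_name_convert=('C:\ORACLE\ORADATA\ORCLC','D:\oracle\oradata\stby')
instance_name=ssss
lock_name_space=ssss
background_dump_dest=D:\oracle\admin\stby\bdump
core_dump_dest=D:\oracle\admin\stby\cdump
user_dump_dest=D:\oracle\admin\stby\udump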
9 - Create a Windows service
If the standby database is running on a Windows system, the oradim utility is used to create the Windows service. Issue the following command from a command prompt window:
C:\>oradim -new -sid ssss -intpwd yyyy -startmode manual

10 - Configure tnsnames for standby database in the standby server
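
A minimal tnsnames.ora entry on the standby server might look like this; the host, port, and service name are illustrative:

STBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orclc))
  )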


11 - Start Physical standby database in standby mode


Start up the standby database using the following commands:
C:\>set oracle_sid=ssss

C:\>sqlplus /nolog

SQL*Plus: Release 9.2.0.1.0 - Production on Mon Apr 25 17:13:26 2005

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

SQL> conn / as sysdba
Connected to an idle instance.
SQL> startup pfile='C:\oracle\ora92\database\initstby.ora' nomount;
ORACLE instance started.
Total System Global Area 135338868 bytes
Fixed Size 453492 bytes
Variable Size 109051904 bytes
Database Buffers 25165824 bytes
Redo Buffers 667648 bytes

SQL> alter database mount standby database;
Database altered.


12. Now you have to transfer archivelogs from the primary database and apply them to the standby database

12.1 Create a script on the primary server to run as a Windows task at desired intervals (every 15 or 30 minutes, etc.), to transfer archivelogs from primary to standby:

move_arch.cmd
==================
pushd C:\oracle\admin\scripts
set ORACLE_SID=xxx
set ORACLE_HOME=C:\oracle\product\10.2.0\db_1

%ORACLE_HOME%\bin\sqlplus dbo/dbo @move_archivelog.sql

robocopy D:\oracle\backup\arch\roc8 \\172.30.36.13\oracle\backup\arch\roc8 /MIR /COPY:DT /FFT /log:robocopystatus.log



The SQL below (move_archivelog.sql, called by the script above) forces a new archivelog from the primary database:

move_archivelog.sql
==================
ALTER SYSTEM SWITCH LOGFILE;
exit;



**** Windows robocopy utility is used above to send archivelogs from arch folder in primary server to arch folder in standby server ****


12.2 Apply the archivelogs arriving from the primary server to the standby database every 15 or 30 minutes via a Windows task on the standby server, as follows:

apply_arch.cmd
==============
pushd C:\oracle\admin\scripts
set ORACLE_SID=ssss
set ORACLE_HOME=C:\oracle\product\10.2.0\db_1
%ORACLE_HOME%\bin\sqlplus /nolog @Apply_archivelog.sql




Apply_archivelog.sql
===============
connect sys/oracle as sysdba;
set timing on
set echo on
spool apply_archp.log
recover standby database;
AUTO
spool off
exit



License requirements by Oracle:

A standby database does not require an Oracle license for up to ten separate days in a calendar year.

Failover – In this type of recovery, nodes are arranged in a cluster and share
one disk array. A Failover cluster is a group of systems, bound together into a
common resource pool. In this type of recovery method, the Production node
acts as the primary node. When the primary node fails, one of the surviving
nodes in the cluster acts as the primary node. Solutions like Oracle Failsafe
(included with Oracle Database EE or SE, SE1), or third party vendor solutions
(e.g. Veritas, HP Service Guard, HACMP, Linux HA - Heartbeat) are used to
manage Failover environments. In this type of environment, Oracle permits
licensed Oracle customers to run some Technology Programs on an
unlicensed spare computer for up to a total of ten separate days in any given
calendar year. Once the primary node is repaired, you must switch back to the
primary node. Once the failover period has exceeded ten days, the failover
node must be licensed.

Source: Oracle SOFTWARE INVESTMENT GUIDE


Testing the environment

Once the primary and standby databases are configured and log transport is in place, we need to test thoroughly.

We can execute some DML and DDL on the primary database. After the archive logs are applied on the standby database, we should be able to see those DDL and DML changes there.

To check, open the standby database in read-only mode.

SQL> alter database open read only;

After the test, restart the database in mount mode. If you open the database read-write, you will have to recreate the standby database.
1. SQL> shutdown immediate;
2. SQL>startup mount;
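
To verify that the archived logs really are being applied, compare the highest sequence on each side; for example, on the standby (while mounted):

SQL> select max(sequence#) from v$log_history;

and compare it with the "Next log sequence to archive" shown by archive log list on the primary.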



Checking the log file on the standby database:

If you check the log file, you might see errors and warnings like the ones below.


ORCL: SQL> recover standby database;
ORA-00279: change 239868 generated at 10/20/2010 12:50:02 needed for thread 1
ORA-00289: suggestion :
D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2010_10_20\O1_MF_1_1
00_%U_.ARC
ORA-00280: change 239868 for thread 1 is in sequence #100


Specify log: {=suggested | filename | AUTO | CANCEL}
AUTO
ORA-00279: change 240264 generated at 10/20/2010 13:05:02 needed for thread 1
ORA-00289: suggestion :
D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2010_10_20\O1_MF_1_1
01_%U_.ARC
ORA-00280: change 240264 for thread 1 is in sequence #101
ORA-00278: log file
'D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2010_10_20\O1_MF_1_
100_6CY89YTC_.ARC' no longer needed for this recovery

ORA-00308: cannot open archived log
'D:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2010_10_20\O1_MF_1_
101_%U_.ARC'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
TOM: SQL> spool off

An error like ORA-00308: cannot open archived log at the end of recovery simply means the next archived log has not yet arrived from the primary; recovery can be re-run after the next transfer.

Monday, December 6, 2010

Reducing datafile size to recover free space – Oracle Database 10g

At times, we want to recover some space from the database, either to allocate it to another tablespace or to return it to the OS disk. This situation arises often.
And many times we hit the error "ORA-03297: file contains used data beyond requested RESIZE value".
The concept is simple, and many of you may already know it, but here it is in simpler words.
Let's take the example of one datafile in a database.
Let's see the total free space in datafile 194.
SQL> select sum(bytes)/1024/1024 from dba_free_space
2  where tablespace_name = 'APPS_TS_TX_DATA'
3  and file_id = 194;

SUM(BYTES)/1024/1024
--------------------
1844.125

Now let's see the distribution for the file.
SQL> select file_id, block_id, blocks, bytes, 'Free' from dba_free_space
2  where tablespace_name = 'APPS_TS_TX_DATA'
3  and file_id = 194
4  and rownum < 7
5  order by block_id desc;

   FILE_ID   BLOCK_ID     BLOCKS      BYTES 'FRE
---------- ---------- ---------- ---------- ----
194      35001     220992 1810366464 Free
194      13433         16     131072 Free
194      13417         16     131072 Free
194      13401         16     131072 Free
194      13385         16     131072 Free
194      13369         16     131072 Free

We can see that there are many free blocks. The free space is divided into chunks of 16 blocks, and each chunk has a starting block ID. Notice that this formatting goes up to block 35001, and after that there are no more small chunks: beyond block ID 35001 the whole space is available as one single large unit. This is because of the high water mark. When an object is created, it is created physically in the datafile and allocated blocks. The number of blocks allocated depends on the INITIAL extent storage parameter, which can be given at the time of creating the object; if we don't give this parameter, it takes a default, here 16 blocks. So 16 blocks get assigned when you create an object, for example a table.
You might be wondering: after block 35001 we have all free space, and we also have free space at blocks 13401, 13417, 13433, etc. But where are the blocks between 13433 and 35001?
The answer can be found in dba_extents. All the blocks between 13433 and 35001 are occupied by objects and are not free. That's why you are not able to see them in the dba_free_space view, but you can find them in dba_extents. So in the case of file 194, objects were created up to block 35001 (hence the block formatting up to block 35001), and at some later point some of those objects got dropped, so their space was freed, but the block formatting remained intact (for example, block IDs 13401, 13417, 13433, etc. were freed). This is what we call the high water mark of a datafile.
As we saw earlier, we have around 1844.125 MB of free space. Can we recover all of it? I wish, but no, we cannot. The reason is that some of the blocks are free "in between" and there is fragmentation. To make the point clear, let's try to reduce the file size.
Let's try that. This file's original size is 2000M.

SQL> select bytes/1024/1024 from dba_data_files where file_id = 194;


BYTES/1024/1024
---------------
2000

and as per dba_free_space it has 1844.125 MB of free space. Let's reduce the file size by 1800 MB; it will definitely give an error.
SQL> alter database datafile 194 resize 200M;
alter database datafile 194 resize 200M
*
ERROR at line 1:
ORA-03297: file contains used data beyond requested RESIZE value

What happened?
The reason you are not able to free the space is that certain objects exist in between, and there is no contiguous 1800 MB free chunk at the end of the file. The structure of this file is as given below.
X -> Occupied Block
0 -> Free Block
So here blocks 35001 and 13433 are free, but in-between blocks are occupied. When you try to reduce the size of the datafile by 1800 MB, you go beyond block ID 35001, where objects do exist. After those objects there may be free space, but that doesn't matter: you can at most shrink the datafile back to block ID 35001, and not beyond that.
There are around 220992 free blocks after block ID 35001. That makes around 220992 x 8192 = 1810366464 bytes = 1726.5 MB of free space (with an 8 KB block size); that's what the BYTES column is showing.
So we can reduce the size of this datafile by a maximum of 1810366464 bytes = 1726.5 MB. If you try to reduce the size by more than that, you will hit the error ORA-03297: file contains used data beyond requested RESIZE value.
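
The byte arithmetic can be double-checked from SQL*Plus itself:

SQL> select 220992*8192 bytes_free, round(220992*8192/1024/1024,1) mb_free from dual;

This returns 1810366464 bytes, i.e. 1726.5 MB.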
Now try reducing by 1700 MB and it will succeed:
SQL> alter database datafile 194 resize 300M;
Database altered.
What if you still want to reduce the size by the full 1800 MB, i.e., make the datafile 200 MB?
To do that, you need the following steps.
- Find the block ID at the 1800 MB boundary.
We know that the last 1810366464 bytes are free.
1800 MB = 1887436800 bytes
Bytes remaining = 1887436800 - 1810366464 = 77070336 bytes = 9408 blocks
The block ID up to which we want to clear objects is 35001 - 9408 = 25593.
- Drop all the objects which lie beyond this block. That will make 1800 MB free at the end of this datafile, and you can then reduce its size by 1800 MB.
The moral of the story: don't get into all these calculations routinely. It is good to know the logic, and it helps in a crisis, but dropping objects, shrinking the file, and recreating the objects is not a good routine practice.
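
To see which segments sit above the target block and would have to be moved or dropped, dba_extents can be queried (the file and block numbers are those from the example above):

SQL> select owner, segment_name, segment_type, block_id
  2  from dba_extents
  3  where file_id = 194
  4  and block_id + blocks - 1 >= 25593
  5  order by block_id;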

Script for Oracle 9i and higher versions:



REM Script is meant for Oracle version 9 and higher
REM -----------------------------------------------

set serveroutput on
exec dbms_output.enable(1000000);

declare

cursor c_dbfile is
select f.tablespace_name,f.file_name,f.file_id,f.blocks,t.block_size
from dba_data_files f,
dba_tablespaces t
where f.tablespace_name = t.tablespace_name
and t.status = 'ONLINE'
order by f.tablespace_name,f.file_id;

cursor c_freespace(v_file_id in number) is
select block_id, block_id+blocks max_block
from dba_free_space
where file_id = v_file_id
order by block_id desc;

/* variables to check settings/values */
dummy number;
checkval varchar2(10);
block_correction number;

/* running variable to show (possible) end-of-file */
file_min_block number;

/* variables to check if recycle_bin is on and if extent as checked is in ... */
recycle_bin boolean:=false;
extent_in_recycle_bin boolean;

/* exception handler needed for non-existing tables note:344940.1 */
sqlstr varchar2(100);
table_does_not_exist exception;
pragma exception_init(table_does_not_exist,-942);

begin

/* recyclebin is present in Oracle 10.2 and higher and might contain extent as checked */
begin
select value into checkval from v$parameter where name = 'recyclebin';
if checkval = 'on'
then
recycle_bin := true;
end if;
exception
when no_data_found
then
recycle_bin := false;
end;

/* main loop */
for c_file in c_dbfile
loop
/* initialization of loop variables */
dummy :=0;
extent_in_recycle_bin := false;
file_min_block := c_file.blocks;

begin


<<check_free>>

for c_free in c_freespace(c_file.file_id)
loop
/* if blocks is an uneven value there is a need to correct with -1 to compare with end-of-file which is even */
block_correction := (0-mod(c_free.max_block,2));
if file_min_block = c_free.max_block+block_correction
then

/* free extent is at end so file can be resized */
file_min_block := c_free.block_id;

else
/* no more free extent at end of file, file cannot be further resized */
exit check_free;
end if;
end loop;
end;

/* check if file can be resized, minimal size of file 16 blocks */
if (file_min_block = c_file.blocks) or (c_file.blocks <= 16)
then

dbms_output.put_line('Tablespace: '||c_file.tablespace_name||' Datafile: '||c_file.file_name);
dbms_output.put_line('cannot be resized no free extents found');
dbms_output.put_line('.');

else

/* file needs minimal no of blocks which does vary over versions */
if file_min_block < 16
then
file_min_block := 16;
end if;

dbms_output.put_line('Tablespace: '||c_file.tablespace_name||' Datafile: '||c_file.file_name);
dbms_output.put_line('current size: '||(c_file.blocks*c_file.block_size)/1024||'K'||' can be resized to: '||round((file_min_block*c_file.block_size)/1024)||'K (reduction of: '||round(((c_file.blocks-file_min_block)/c_file.blocks)*100,2)||' %)');

/* below is only true if recyclebin is on */
if recycle_bin
then
begin
sqlstr:='select distinct 1 from recyclebin$ where file#='||c_file.file_id;
execute immediate sqlstr into dummy;

if dummy > 0
then

dbms_output.put_line('Extents found in recyclebin for above file/tablespace');
dbms_output.put_line('Implying that purge of recyclebin might be needed in order to resize');
dbms_output.put_line('SQL> purge tablespace '||c_file.tablespace_name||';');
end if;
exception
when no_data_found
then null;
when table_does_not_exist
then null;
end;
end if;
dbms_output.put_line('SQL> alter database datafile '''||c_file.file_name||''' resize '||round((file_min_block*c_file.block_size)/1024)||'K;');
dbms_output.put_line('.');

end if;

end loop;

end;
/

Example output for Oracle version 9 and higher:
Tablespace: TEST Datafile: /oradata/v112/test01.dbf
cannot be resized no free extents found
.
Tablespace: UNDOTBS1 Datafile: /oradata/v112/undotbs01.dbf
current size: 9384960K can be resized to: 106496K (reduction of: 98.87 %)
SQL> alter database datafile '/oradata/v112/undotbs01.dbf' resize 106496K;
.
Tablespace: USERS Datafile: /oradata/v112/users01.dbf
current size: 328960K can be resized to: 117248K (reduction of: 64.36 %)
Extents found in recyclebin for above file/tablespace
Implying that purge of recyclebin might be needed in order to resize
SQL> purge tablespace USERS;
SQL> alter database datafile '/oradata/v112/users01.dbf' resize 117248K;

Friday, December 3, 2010

3. Redo Log File Management

What Is the Online Redo Log?
The most crucial structure for recovery operations is the online redo log, which consists of two or more preallocated files that store all changes made to the database as they occur. Every instance of an Oracle database has an associated online redo log to protect the database in case of an instance failure.
Redo Threads
Each database instance has its own online redo log groups. These online redo log groups, multiplexed or not, are called an instance's thread of online redo. In typical configurations, only one database instance accesses an Oracle database, so only one thread is present. When running Oracle Real Application Clusters, however, two or more instances concurrently access a single database and each instance has its own thread.
This chapter describes how to configure and manage the online redo log when the Oracle9i Real Application Clusters feature is not used. Hence, the thread number can be assumed to be 1 in all discussions and examples of statements.
Online Redo Log Contents
Online redo log files are filled with redo records. A redo record, also called a redo entry, is made up of a group of change vectors, each of which is a description of a change made to a single block in the database. For example, if you change a salary value in an employee table, you generate a redo record containing change vectors that describe changes to the data segment block for the table, the rollback segment data block, and the transaction table of the rollback segments.
Redo entries record data that you can use to reconstruct all changes made to the database, including the rollback segments. Therefore, the online redo log also protects rollback data. When you recover the database using redo data, Oracle reads the change vectors in the redo records and applies the changes to the relevant blocks.
Redo records are buffered in a circular fashion in the redo log buffer of the SGA (see "How Oracle Writes to the Online Redo Log") and are written to one of the online redo log files by the Oracle background process Log Writer (LGWR). Whenever a transaction is committed, LGWR writes the transaction's redo records from the redo log buffer of the SGA to an online redo log file, and a system change number (SCN) is assigned to identify the redo records for each committed transaction. Only when all redo records associated with a given transaction are safely on disk in the online logs is the user process notified that the transaction has been committed.
Redo records can also be written to an online redo log file before the corresponding transaction is committed. If the redo log buffer fills, or another transaction commits, LGWR flushes all of the redo log entries in the redo log buffer to an online redo log file, even though some redo records may not be committed. If necessary, Oracle can roll back these changes.
How Oracle Writes to the Online Redo Log
The online redo log of a database consists of two or more online redo log files. Oracle requires a minimum of two files to guarantee that one is always available for writing while the other is being archived (if in ARCHIVELOG mode).
LGWR writes to online redo log files in a circular fashion. When the current online redo log file fills, LGWR begins writing to the next available online redo log file. When the last available online redo log file is filled, LGWR returns to the first online redo log file and writes to it, starting the cycle again. Figure 7-1 illustrates the circular writing of the online redo log file. The numbers next to each line indicate the sequence in which LGWR writes to each online redo log file.
Filled online redo log files are available to LGWR for reuse depending on whether archiving is enabled.
* If archiving is disabled (NOARCHIVELOG mode), a filled online redo log file is available once the changes recorded in it have been written to the datafiles.
* If archiving is enabled (ARCHIVELOG mode), a filled online redo log file is available to LGWR once the changes recorded in it have been written to the datafiles and once the file has been archived.
Figure Circular Use of Online Redo Log Files by LGWR


Active (Current) and Inactive Online Redo Log Files
At any given time, Oracle uses only one of the online redo log files to store redo records written from the redo log buffer. The online redo log file that LGWR is actively writing to is called the current online redo log file.
Online redo log files that are required for instance recovery are called active online redo log files. Online redo log files that are not required for instance recovery are called inactive.
If you have enabled archiving (ARCHIVELOG mode), Oracle cannot reuse or overwrite an active online log file until ARCn has archived its contents. If archiving is disabled (NOARCHIVELOG mode), when the last online redo log file fills writing continues by overwriting the first available active file.
Log Switches and Log Sequence Numbers
A log switch is the point at which Oracle ends writing to one online redo log file and begins writing to another. Normally, a log switch occurs when the current online redo log file is completely filled and writing must continue to the next online redo log file. However, you can specify that a log switch occurs in a time-based manner, regardless of whether the current online redo log file is completely filled. You can also force log switches manually.
Oracle assigns each online redo log file a new log sequence number every time that a log switch occurs and LGWR begins writing to it. If Oracle archives online redo log files, the archived log retains its log sequence number. The online redo log file that is cycled back for use is given the next available log sequence number.
Each online or archived redo log file is uniquely identified by its log sequence number. During crash, instance, or media recovery, Oracle properly applies redo log files in ascending order by using the log sequence number of necessary archived and online redo log files.
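
For example, a log switch can be forced manually with:

SQL> alter system switch logfile;

Time-based log switching, where supported (9i and later), is controlled with the ARCHIVE_LAG_TARGET initialization parameter.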
Planning the Online Redo Log
This section describes guidelines you should consider when configuring a database instance's online redo log, and contains the following topics:
* Multiplexing Online Redo Log Files
* Placing Online Redo Log Members on Different Disks
* Setting the Size of Online Redo Log Members
* Choosing the Number of Online Redo Log Files
* Controlling Archive Lag
Multiplexing Online Redo Log Files

Oracle provides the capability to multiplex an instance's online redo log files to safeguard against damage to its online redo log files. When multiplexing online redo log files, LGWR concurrently writes the same redo log information to multiple identical online redo log files, thereby eliminating a single point of redo log failure.
Note:
Oracle recommends that you multiplex your redo log files. The loss of the log file data can be catastrophic if recovery is required.
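
To multiplex an existing group, a second member can be added on another disk; the file name and group number here are illustrative:

SQL> alter database add logfile member 'D:\oracle\oradata\orcl\redo01b.log' to group 1;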
Figure Multiplexed Online Redo Log Files



The corresponding online redo log files are called groups. Each online redo log file in a group is called a member. In the figure, A_LOG1 and B_LOG1 are both members of Group 1, A_LOG2 and B_LOG2 are both members of Group 2, and so forth. Each member in a group must be exactly the same size.
Notice that each member of a group is concurrently active, or, concurrently written to by LGWR, as indicated by the identical log sequence numbers assigned by LGWR. In Figure 7-2, first LGWR writes to A_LOG1 in conjunction with B_LOG1, then A_LOG2 in conjunction with B_LOG2, and so on. LGWR never writes concurrently to members of different groups (for example, to A_LOG1 and B_LOG2).
Responding to Online Redo Log Failure
Whenever LGWR cannot write to a member of a group, Oracle marks that member as INVALID and writes an error message to the LGWR trace file and to the database's alert file to indicate the problem with the inaccessible files. LGWR reacts differently when certain online redo log members are unavailable, depending on the reason for the unavailability.
If LGWR can successfully write to at least one member in a group:
Writing proceeds as normal. LGWR simply writes to the available members of a group and ignores the unavailable members.

If LGWR cannot access the next group at a log switch because the group needs to be archived:
Database operation temporarily halts until the group becomes available, or until the group is archived.

If all members of the next group are inaccessible to LGWR at a log switch because of media failure:
Oracle returns an error and the database instance shuts down. In this case, you may need to perform media recovery on the database from the loss of an online redo log file.
If the database checkpoint has moved beyond the lost redo log, media recovery is not necessary since Oracle has saved the data recorded in the redo log to the datafiles. Simply drop the inaccessible redo log group. If Oracle did not archive the bad log, use ALTER DATABASE CLEAR UNARCHIVED LOG to disable archiving before the log can be dropped.

If all members of a group suddenly become inaccessible to LGWR while it is writing to them:
Oracle returns an error and the database instance immediately shuts down. In this case, you may need to perform media recovery. If the media containing the log is not actually lost (for example, if the drive for the log was inadvertently turned off), media recovery may not be needed; you only need to turn the drive back on and let Oracle perform instance recovery.
Legal and Illegal Configurations
To safeguard against a single point of online redo log failure, a multiplexed online redo log is ideally symmetrical: all groups of the online redo log have the same number of members. Nevertheless, Oracle does not require that a multiplexed online redo log be symmetrical. For example, one group can have only one member, while other groups have two members. This configuration protects against disk failures that temporarily affect some online redo log members but leave others intact.
The only requirement for an instance's online redo log is that it have at least two groups. The figure shows legal and illegal multiplexed online redo log configurations; the second configuration is illegal because it has only one group.
Figure Legal and Illegal Multiplexed Online Redo Log Configuration


Placing Online Redo Log Members on Different Disks
When setting up a multiplexed online redo log, place members of a group on different disks. If a single disk fails, then only one member of a group becomes unavailable to LGWR and other members remain accessible to LGWR, so the instance can continue to function.
If you archive the redo log, spread online redo log members across disks to eliminate contention between the LGWR and ARCn background processes. For example, if you have two groups of duplexed online redo log members, place each member on a different disk and set your archiving destination to a fifth disk. Consequently, there is never contention between LGWR (writing to the members) and ARCn (reading the members).
Datafiles and online redo log files should also be on different disks to reduce contention in writing data blocks and redo records.
Setting the Size of Online Redo Log Members
When setting the size of online redo log files, consider whether you will be archiving the redo log. Online redo log files should be sized so that a filled group can be archived to a single unit of offline storage media (such as a tape or disk), with the least amount of space on the medium left unused. For example, suppose only one filled online redo log group can fit on a tape and 49% of the tape's storage capacity remains unused. In this case, it is better to decrease the size of the online redo log files slightly, so that two log groups could be archived for each tape.
With multiplexed groups of online redo logs, all members of the same group must be the same size. Members of different groups can have different sizes. However, there is no advantage in varying file size between groups. If checkpoints are not set to occur between log switches, make all groups the same size to guarantee that checkpoints occur at regular intervals.
Choosing the Number of Online Redo Log Files
The best way to determine the appropriate number of online redo log files for a database instance is to test different configurations. The optimum configuration has the fewest groups possible without hampering LGWR's ability to write redo log information.
In some cases, a database instance may require only two groups. In other situations, a database instance may require additional groups to guarantee that a recycled group is always available to LGWR. During testing, the easiest way to determine if the current online redo log configuration is satisfactory is to examine the contents of the LGWR trace file and the database's alert log. If messages indicate that LGWR frequently has to wait for a group because a checkpoint has not completed or a group has not been archived, add groups.
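In addition to scanning the LGWR trace file and the alert log, a quick supplementary check (not part of the original text) is to query the wait event statistics. The event names below are the standard Oracle wait events for the two conditions described above:

```sql
-- Nonzero counts here suggest LGWR is waiting on checkpoints or archiving;
-- consider adding redo log groups.
SELECT EVENT, TOTAL_WAITS, TIME_WAITED
FROM V$SYSTEM_EVENT
WHERE EVENT IN ('log file switch (checkpoint incomplete)',
                'log file switch (archiving needed)');
```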
Consider the parameters that can limit the number of online redo log files before setting up or altering the configuration of an instance's online redo log. The following parameters limit the number of online redo log files that you can add to a database:
* The MAXLOGFILES parameter used in the CREATE DATABASE statement determines the maximum number of groups of online redo log files for each database. Group values can range from 1 to MAXLOGFILES. The only way to override this upper limit is to re-create the database or its control file. Thus, it is important to consider this limit before creating a database. If MAXLOGFILES is not specified for the CREATE DATABASE statement, Oracle uses an operating system specific default value.
* The MAXLOGMEMBERS parameter used in the CREATE DATABASE statement determines the maximum number of members for each group. As with MAXLOGFILES, the only way to override this upper limit is to re-create the database or control file. Thus, it is important to consider this limit before creating a database. If no MAXLOGMEMBERS parameter is specified for the CREATE DATABASE statement, Oracle uses an operating system default value.
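As a sketch of how both limits are set at creation time: the database name and file paths below are hypothetical, and the statement is abbreviated (a real CREATE DATABASE typically also includes datafile, character set, and other clauses).

```sql
-- Hypothetical, abbreviated example; these limits can only be set at
-- database (or control file) creation time.
CREATE DATABASE mydb
    MAXLOGFILES   16   -- at most 16 redo log groups
    MAXLOGMEMBERS  4   -- at most 4 members per group
    LOGFILE
      GROUP 1 ('/oracle/dbs/log1a.rdo', '/oracle/dbs/log1b.rdo') SIZE 500K,
      GROUP 2 ('/oracle/dbs/log2a.rdo', '/oracle/dbs/log2b.rdo') SIZE 500K;
```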
You can force all enabled online redo log threads to switch their current logs in a time-based fashion. In a primary/standby configuration, changes are made available to the standby database by archiving and shipping logs of the primary site to the standby database. The changes that are being applied by the standby database can lag the changes that are occurring on the primary database.
This lag can happen because the standby database must wait for the changes in the primary database's online redo log to be archived (into the archived redo log) and then shipped to it. To control or limit this lag, you set the ARCHIVE_LAG_TARGET initialization parameter. Setting this parameter allows you to limit, measured in time, how long the lag can become.
Setting the ARCHIVE_LAG_TARGET Initialization Parameter
When you set the ARCHIVE_LAG_TARGET initialization parameter, you cause Oracle to examine an instance's current online redo log periodically. If the following conditions are met, the instance will switch the log:
* The current log was created prior to n seconds ago, and the estimated archival time for the current log is m seconds (proportional to the number of redo blocks used in the current log), where n + m exceeds the value of the ARCHIVE_LAG_TARGET initialization parameter.
* The current log contains redo records.
In an Oracle Real Application Clusters environment, the instance also nudges other threads into switching and archiving logs if they are falling behind. This can be particularly useful when one instance in the cluster is more idle than the other instances (as when you are running a 2-node primary/secondary configuration of Oracle Real Application Clusters).
Initialization parameter ARCHIVE_LAG_TARGET specifies the target of how many seconds of redo the standby could lose in the event of a primary shutdown or crash if the Data Guard environment is not configured in a no-data-loss mode. It also provides an upper limit of how long (in the number of seconds) the current log of the primary database can span. Because the estimated archival time is also considered, this is not the exact log switch time.
The following initialization parameter setting sets the log switch interval to 30 minutes (a typical value).
ARCHIVE_LAG_TARGET = 1800
A value of 0 disables this time-based log switching functionality. This is the default setting.
You can set the ARCHIVE_LAG_TARGET initialization parameter even if there is no standby database. For example, the ARCHIVE_LAG_TARGET parameter can be set specifically to force logs to be switched and archived.
ARCHIVE_LAG_TARGET is a dynamic parameter and can be set with the ALTER SYSTEM SET statement.
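For example, the 30-minute setting shown above could be applied to a running instance as follows:

```sql
-- Dynamic change; no instance restart required
ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 1800;
```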
Caution:
The ARCHIVE_LAG_TARGET parameter must be set to the same value in all instances of an Oracle Real Application Clusters environment. Failing to do so results in unspecified behavior and is strongly discouraged.
Factors Affecting the Setting of ARCHIVE_LAG_TARGET
Consider the following factors when determining if you want to set the ARCHIVE_LAG_TARGET parameter and in determining the value for this parameter.
* Overhead of switching (as well as archiving) logs
* How frequently normal log switches occur as a result of log full conditions
* How much redo loss is tolerated in the standby database
Setting ARCHIVE_LAG_TARGET may not be very useful if natural log switches already occur more frequently than the interval specified. However, in the case of irregularities of redo generation speed, the interval does provide an upper limit for the time range each current log covers.
If the ARCHIVE_LAG_TARGET initialization parameter is set to a very low value, there can be a negative impact on performance. This can force frequent log switches. Set the parameter to a reasonable value so as not to degrade the performance of the primary database.
Creating Online Redo Log Groups and Members
Plan the online redo log of a database and create all required groups and members of online redo log files during database creation. However, there are situations where you might want to create additional groups or members. For example, adding groups to an online redo log can correct redo log group availability problems.
To create new online redo log groups and members, you must have the ALTER DATABASE system privilege. A database can have up to MAXLOGFILES groups.
To create a new group of online redo log files, use the SQL statement ALTER DATABASE with the ADD LOGFILE clause.
The following statement adds a new group of redo logs to the database:
ALTER DATABASE
ADD LOGFILE ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 500K;
Note:
Fully specify the filenames of new log members to indicate where the operating system files should be created. Otherwise, the files will be created in either the default or current directory of the database server, depending upon your operating system.
You can also specify the number that identifies the group using the GROUP option:
ALTER DATABASE
ADD LOGFILE GROUP 10 ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo')
SIZE 500K;
Using group numbers can make administering redo log groups easier. However, the group number must be between 1 and MAXLOGFILES. Do not skip redo log file group numbers (that is, do not number your groups 10, 20, 30, and so on), or you will consume unnecessary space in the control files of the database.
Creating Online Redo Log Members
In some cases, it might not be necessary to create a complete group of online redo log files. A group could already exist, but not be complete because one or more members of the group were dropped (for example, because of a disk failure). In this case, you can add new members to an existing group.
To create new online redo log members for an existing group, use the SQL statement ALTER DATABASE with the ADD LOGFILE MEMBER clause. The following statement adds a new redo log member to redo log group number 2:
ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2b.rdo' TO GROUP 2;
Notice that filenames must be specified, but sizes need not be. The size of the new members is determined from the size of the existing members of the group.
When using the ALTER DATABASE statement, you can alternatively identify the target group by specifying all of the other members of the group in the TO parameter, as shown in the following example:
ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2c.rdo'
TO ('/oracle/dbs/log2a.rdo', '/oracle/dbs/log2b.rdo');
Note:
Fully specify the filenames of new log members to indicate where the operating system files should be created. Otherwise, the files will be created in either the default or current directory of the database server, depending upon your operating system. You may also note that the status of the new log member is shown as INVALID. This is normal and it will change to active (blank) when it is first used.
Relocating and Renaming Online Redo Log Members
You can use operating system commands to relocate online redo logs, then use the ALTER DATABASE statement to make their new names (locations) known to the database. This procedure is necessary, for example, if the disk currently used for some online redo log files is going to be removed, or if datafiles and a number of online redo log files are stored on the same disk and should be separated to reduce contention.
To rename online redo log members, you must have the ALTER DATABASE system privilege. Additionally, you might also need operating system privileges to copy files to the desired location and privileges to open and back up the database.
Before relocating your redo logs, or making any other structural changes to the database, completely back up the database in case you experience problems while performing the operation. As a precaution, after renaming or relocating a set of online redo log files, immediately back up the database's control file.
Use the following steps for relocating redo logs. The example used to illustrate these steps assumes:
* The log files are located on two disks: diska and diskb.
* The online redo log is duplexed: one group consists of the members /diska/logs/log1a.rdo and /diskb/logs/log1b.rdo, and the second group consists of the members /diska/logs/log2a.rdo and /diskb/logs/log2b.rdo.
* The online redo log files located on diska must be relocated to diskc. The new filenames will reflect the new location: /diskc/logs/log1c.rdo and /diskc/logs/log2c.rdo.
Steps for Renaming Online Redo Log Members
1. Shut down the database.
SHUTDOWN
2. Copy the online redo log files to the new location.
Operating system files, such as online redo log members, must be copied using the appropriate operating system commands. See your operating system specific documentation for more information about copying files.
Note:
You can execute an operating system command to copy a file (or perform other operating system commands) without exiting SQL*Plus by using the HOST command. Some operating systems allow you to use a character in place of the word HOST. For example, you can use ! in UNIX.
The following example uses operating system commands (UNIX) to move the online redo log members to a new location:
mv /diska/logs/log1a.rdo /diskc/logs/log1c.rdo
mv /diska/logs/log2a.rdo /diskc/logs/log2c.rdo
3. Start up the database and mount it, but do not open it.
CONNECT / as SYSDBA
STARTUP MOUNT
4. Rename the online redo log members.
Use the ALTER DATABASE statement with the RENAME FILE clause to rename the database's online redo log files.
ALTER DATABASE
RENAME FILE '/diska/logs/log1a.rdo', '/diska/logs/log2a.rdo'
TO '/diskc/logs/log1c.rdo', '/diskc/logs/log2c.rdo';
5. Open the database for normal operation.
The online redo log alterations take effect when the database is opened.
ALTER DATABASE OPEN;
Dropping Online Redo Log Groups and Members
In some cases, you may want to drop an entire group of online redo log members. For example, you want to reduce the number of groups in an instance's online redo log. In a different case, you may want to drop one or more specific online redo log members. For example, if a disk failure occurs, you may need to drop all the online redo log files on the failed disk so that Oracle does not try to write to the inaccessible files. In other situations, particular online redo log files become unnecessary. For example, a file might be stored in an inappropriate location.
Dropping Log Groups
To drop an online redo log group, you must have the ALTER DATABASE system privilege. Before dropping an online redo log group, consider the following restrictions and precautions:
* An instance requires at least two groups of online redo log files, regardless of the number of members in the groups. (A group is one or more members.)
* You can drop an online redo log group only if it is inactive. If you need to drop the current group, first force a log switch to occur.
* Make sure an online redo log group is archived (if archiving is enabled) before dropping it. To see whether this has happened, use the V$LOG view.
SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
GROUP# ARC STATUS
--------- --- ----------------
1 YES ACTIVE
2 NO CURRENT
3 YES INACTIVE
4 YES INACTIVE
To drop an online redo log group, use the SQL statement ALTER DATABASE with the DROP LOGFILE clause.
The following statement drops redo log group number 3:
ALTER DATABASE DROP LOGFILE GROUP 3;
When an online redo log group is dropped from the database, and you are not using the Oracle Managed Files feature, the operating system files are not deleted from disk. Rather, the control files of the associated database are updated to drop the members of the group from the database structure. After dropping an online redo log group, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log files.
When using Oracle-managed files, the cleanup of operating systems files is done automatically for you.
Dropping Online Redo Log Members
To drop an online redo log member, you must have the ALTER DATABASE system privilege. Consider the following restrictions and precautions before dropping individual online redo log members:
* It is permissible to drop online redo log files so that a multiplexed online redo log becomes temporarily asymmetric. For example, if you use duplexed groups of online redo log files, you can drop one member of one group, even though all other groups have two members each. However, you should rectify this situation immediately so that all groups have at least two members, and thereby eliminate the single point of failure possible for the online redo log.
* An instance always requires at least two valid groups of online redo log files, regardless of the number of members in the groups. (A group is one or more members.) If the member you want to drop is the last valid member of the group, you cannot drop the member until the other members become valid. To see a redo log file's status, use the V$LOGFILE view. A redo log file becomes INVALID if Oracle cannot access it. It becomes STALE if Oracle suspects that it is not complete or correct. A stale log file becomes valid again the next time its group is made the active group.
* You can drop an online redo log member only if it is not part of an active or current group. If you want to drop a member of an active group, first force a log switch to occur.
* Make sure the group to which an online redo log member belongs is archived (if archiving is enabled) before dropping the member. To see whether this has happened, use the V$LOG view.
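The checks described in the last three points can be sketched as queries against V$LOGFILE and V$LOG:

```sql
-- Member status: INVALID means Oracle cannot access the file,
-- STALE means the file may be incomplete; blank means in use.
SELECT GROUP#, STATUS, MEMBER
FROM V$LOGFILE
ORDER BY GROUP#;

-- Group status and archive state: only drop members of groups that
-- are not CURRENT or ACTIVE and (if archiving) show ARCHIVED = YES.
SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
```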
To drop specific inactive online redo log members, use the ALTER DATABASE statement with the DROP LOGFILE MEMBER clause.
The following statement drops the redo log /oracle/dbs/log3c.rdo:
ALTER DATABASE DROP LOGFILE MEMBER '/oracle/dbs/log3c.rdo';
When an online redo log member is dropped from the database, the operating system file is not deleted from disk. Rather, the control files of the associated database are updated to drop the member from the database structure. After dropping an online redo log file, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log file.
To drop a member of an active group, you must first force a log switch.
Forcing Log Switches
A log switch occurs when LGWR stops writing to one online redo log group and starts writing to another. By default, a log switch occurs automatically when the current online redo log file group fills.
You can force a log switch to make the currently active group inactive and available for online redo log maintenance operations. For example, you want to drop the currently active group, but are not able to do so until the group is inactive. You may also wish to force a log switch if the currently active group needs to be archived at a specific time before the members of the group are completely filled. This option is useful in configurations with large online redo log files that take a long time to fill.
To force a log switch, you must have the ALTER SYSTEM privilege. Use the ALTER SYSTEM statement with the SWITCH LOGFILE clause.
The following statement forces a log switch:
ALTER SYSTEM SWITCH LOGFILE;
Verifying Blocks in Redo Log Files
You can configure Oracle to use checksums to verify blocks in the redo log files. If you set the initialization parameter DB_BLOCK_CHECKSUM to TRUE, block checking is enabled for all Oracle database blocks written to disk, including redo log blocks. The default value of DB_BLOCK_CHECKSUM is FALSE.
If you enable block checking, Oracle computes a checksum for each redo log block written to the current log. Oracle writes the checksum in the header of the block. Oracle uses the checksum to detect corruption in a redo log block. Oracle tries to verify the redo log block when it writes the block to an archive log file and when the block is read from an archived log during recovery.
If Oracle detects a corruption in a redo log block while trying to archive it, the system attempts to read the block from another member in the group. If the block is corrupted in all members of the redo log group, then archiving cannot proceed.
Note:
There is some overhead and decrease in database performance with DB_BLOCK_CHECKSUM enabled. Monitor your database performance to decide if the benefit of using data block checksums to detect corruption outweighs the performance impact.
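For example, block checking could be enabled dynamically and the current setting verified with a sketch like the following:

```sql
-- Enable checksums for all database blocks written to disk,
-- including redo log blocks
ALTER SYSTEM SET DB_BLOCK_CHECKSUM = TRUE;

-- Confirm the current setting
SELECT NAME, VALUE
FROM V$PARAMETER
WHERE NAME = 'db_block_checksum';
```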
Clearing an Online Redo Log File
An online redo log file might become corrupted while the database is open, and ultimately stop database activity because archiving cannot continue. In this situation, the ALTER DATABASE CLEAR LOGFILE statement can be used to reinitialize the file without shutting down the database.
The following statement clears the log files in redo log group number 3:
ALTER DATABASE CLEAR LOGFILE GROUP 3;
This statement overcomes two situations where dropping redo logs is not possible:
* There are only two log groups
* The corrupt redo log file belongs to the current group
If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the statement.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
This statement clears the corrupted redo logs and avoids archiving them. The cleared redo logs are available for use even though they were not archived.
If you clear a log file that is needed for recovery of a backup, then you can no longer recover from that backup. Oracle writes a message in the alert log describing the backups from which you cannot recover.
Note:
If you clear an unarchived redo log file, you should make another backup of the database.
If you want to clear an unarchived redo log that is needed to bring an offline tablespace online, use the UNRECOVERABLE DATAFILE clause in the ALTER DATABASE CLEAR LOGFILE statement.
If you clear a redo log needed to bring an offline tablespace online, you will not be able to bring the tablespace online again. You will have to drop the tablespace or perform an incomplete recovery. Note that tablespaces taken offline with the NORMAL option do not require recovery.
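Combining the two clauses, a corrupted group that is both unarchived and needed by an offline tablespace could be cleared as follows, bearing in mind the cautions above:

```sql
-- Clears group 3 without archiving it; any offline tablespace that
-- needed this redo cannot be brought online afterward
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3
    UNRECOVERABLE DATAFILE;
```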
Viewing Online Redo Log Information
Use the following views to display online redo log information.
* V$LOG: Displays the redo log file information from the control file
* V$LOGFILE: Identifies redo log groups and members, and member status
* V$LOG_HISTORY: Contains log history information
The following query returns the control file information about the online redo log for a database.
SELECT * FROM V$LOG;
GROUP# THREAD# SEQ BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
------ ------- ----- ------- ------- --- --------- ------------- ---------
1 1 10605 1048576 1 YES ACTIVE 11515628 16-APR-00
2 1 10606 1048576 1 NO CURRENT 11517595 16-APR-00
3 1 10603 1048576 1 YES INACTIVE 11511666 16-APR-00
4 1 10604 1048576 1 YES INACTIVE 11513647 16-APR-00
To see the names of all of the members of a group, use a query similar to the following:
SELECT * FROM V$LOGFILE;
GROUP# STATUS MEMBER
------ ------- ----------------------------------
1 D:\ORANT\ORADATA\IDDB2\REDO04.LOG
2 D:\ORANT\ORADATA\IDDB2\REDO03.LOG
3 D:\ORANT\ORADATA\IDDB2\REDO02.LOG
4 D:\ORANT\ORADATA\IDDB2\REDO01.LOG
If STATUS is blank for a member, then the file is in use.