Quick performance checks:
======================


1. Login check:
Log in to the client URL and check the response time.


2. Find Blocking sessions:
select blocking_session, sid, serial#, wait_class, seconds_in_wait from v$session where blocking_session is not NULL order by blocking_session;

3. Find Database Locks:
select
(select username || ' - ' || osuser from v$session where sid=a.sid) blocker,
a.sid || ', ' ||
(select serial# from v$session where sid=a.sid) sid_serial,
' is blocking ',
(select username || ' - ' || osuser from v$session where sid=b.sid) blockee,
b.sid || ', ' ||
(select serial# from v$session where sid=b.sid) sid_serial
from v$lock a, v$lock b
where a.block = 1
and b.request > 0
and a.id1 = b.id1
and a.id2 = b.id2;
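
If the blocker must be terminated, a minimal sketch (the SID and SERIAL# here are placeholders; substitute the values returned by the queries above):
SQL> -- kill the blocking session identified above
SQL> alter system kill session '123,45678' immediate;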

Archive Hung error:
===============
 
Scenario#1
=========

Error Description: Your database is running in ARCHIVELOG mode and when you try to connect through SQL*Plus you get the following error: ORA-00257: archiver is stuck. CONNECT INTERNAL only, until freed.
Problem Description: If your database is running in archive log mode, the database redo log files are frequently archived to the archive log destination.
The frequency of redo archiving depends on the transaction volume on the database. If the archive log destination is full, the archiver (ARCH) background process cannot write the archive files to the archive log destination. The database then hangs and no new connections are allowed. If you try to connect to the database in this situation, you get the following error message: ORA-00257: archiver is stuck. CONNECT INTERNAL only, until freed.
Solution Description:
Check your database archive log destination (LOG_ARCHIVE_DEST). The file system is most likely full, with no room for additional files. Move some archive files to tape storage, or delete them permanently if they have already been backed up. If you have a successful physical database backup (hot or cold), you can remove the archive files generated before that backup. Archiving will not restart automatically even after the file system clean-up; you have to connect to the database as SYS (as SYSDBA) and execute:
Alter system archive log all;

Recommendation:
Implement file system monitoring scripts with a threshold limit, especially for the archive log destination, so that you get an alarm when the file system reaches the threshold. Also periodically back up or move the older archive files to permanent storage or to another file system where you have space. Otherwise, if your database is a critical production database, you will eventually face an outage.
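
If the archive destination is the flash recovery area, a query like the following could back such an alarm (a sketch only; the 90% threshold is just an example):
SQL> -- raise an alert when more than ~90% of the recovery area is consumed
SQL> select name, round(space_used/space_limit*100, 2) pct_used from v$recovery_file_dest;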

Scenario#2
=========
Error Description:
The database is running in ARCHIVELOG mode and one day, when you try to connect via SQL*Plus, it throws the following error: ORA-00257:  archiver is stuck. CONNECT INTERNAL only, until freed.
When you connect via SQL*Plus as SYSDBA and issue:

Alter system archive log all;
you get the following error message:
ORA-16020:  less destinations available than specified by
             LOG_ARCHIVE_MIN_SUCCEED_DEST

If you issue
select * from v$archive_dest;
the ERROR column for the affected destination shows
ORA-19504:  failed to create file %s

When space becomes available again after cleaning up the destination, automatic archiving is not resumed.
ORA-00257: archiver error. Connect internal only, until freed.
Cause:  The archiver process received an error while trying to archive a redo log. If the problem is not resolved soon, the database will stop executing transactions. The most likely cause of this message is that the destination device is out of space to store the redo log file.
Action:  Check the archiver trace file for a detailed description of the problem. Also verify that the device specified in the initialization parameter ARCHIVE_LOG_DEST is set up properly for archiving.

ORA-16020: less destinations available than specified by LOG_ARCHIVE_MIN_SUCCEED_DEST
Cause:  With automatic archiving enabled, the number of archive log destinations that could be used for the database was less than the LOG_ARCHIVE_MIN_SUCCEED_DEST parameter value.
Action: Either adjust the settings of the log archive destination parameters, or lower the value of LOG_ARCHIVE_MIN_SUCCEED_DEST.

ORA-19504: failed to create file "%s"
Cause:  Call to create file returned an error.
Action: Check additional messages, check access permissions.

Solution Description
Even if you clean up the archive log destination by moving some archive files to tape or to some other location, so that the archive destination has enough room for additional files, you still get this error message. For each archive destination, set the correct archive log path and issue:
alter system set LOG_ARCHIVE_DEST_.. = 'location=/archivelogpath reopen';   

Solution Explanation:
Automatic archiving got stuck because the archive destination filled up and no more space was available. Unfortunately, when space becomes available again after archives have been moved to tape (or elsewhere), automatic archiving is not resumed automatically.
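
After re-pointing the destinations, a quick way to confirm that archiving has resumed (a sketch):
SQL> -- destinations should report VALID with no error text
SQL> select dest_id, status, error from v$archive_dest where status <> 'INACTIVE';
SQL> -- force a log switch so a new archive log is written immediately
SQL> alter system switch logfile;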


Scenario#3
=========

Error Description:
The database is running in ARCHIVELOG mode and one day, when you try to connect via SQL*Plus, it throws the following errors.
ORA-00257: archiver error, connect internal only until freed
ORA-16014:log 2 sequence# 1789 not archived, no available destinations
ORA-00312:online log 2 thread 1:'/u02/oradata/prod/redo/redo02.log'

Problem Description
The error occurs because the flash recovery area (flash_recovery_area) is full.
SQL> select group#, status from v$log;
    GROUP#  STATUS
    ------  ------------
         1  INVALIDATED
         2  INACTIVE
         3  INACTIVE
If you issue the command to archive all log files, it gives the following error.
SQL> archive log all;
ORA-16020: less destinations available than specified by
LOG_ARCHIVE_MIN_SUCCEED_DEST

The single mandatory destination (log_archive_min_succeed_dest = 1) is not able to perform the archiving.
Solution Description
1. Either allow more space in the DB_RECOVERY_FILE_DEST with the DB_RECOVERY_FILE_DEST_SIZE parameter:
SQL> alter system set db_recovery_file_dest_size=3G ;
2. Or, to avoid the situation once the 3 GB is full, set the following parameters so that when dest_1 is full, archiving is automatically performed to the alternate dest_2:
log_archive_dest_1='LOCATION=use_db_recovery_file_dest NOREOPEN ALTERNATE=LOG_ARCHIVE_DEST_2'
log_archive_dest_2='LOCATION=/other_dest_for_archiving'
log_archive_dest_state_1='enable'
log_archive_dest_state_2='alternate'
db_recovery_file_dest='/u01/app/oracle/product/10.1.0/db_1/flash_recovery_area'
db_recovery_file_dest_size=2G
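
The same settings can also be applied dynamically; a minimal sketch (assuming an spfile; the paths are examples only):
SQL> alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST NOREOPEN ALTERNATE=LOG_ARCHIVE_DEST_2' scope=both;
SQL> alter system set log_archive_dest_2='LOCATION=/other_dest_for_archiving' scope=both;
SQL> alter system set log_archive_dest_state_2='ALTERNATE' scope=both;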
Solution
1. Confirm whether the database is running in archive log mode and automatic archiving is enabled.
SQL> archive log list;

2. If archive destination is defined by USE_DB_RECOVERY_FILE_DEST, determine the archive destination by:
SQL> show parameter db_recovery_file_dest;
Also check the value set for db_recovery_file_dest_size

3. Check the space used in flash recovery area by:
SQL> SELECT * FROM V$RECOVERY_FILE_DEST;

4. If SPACE_USED has reached SPACE_LIMIT, clean up the archive logs by backing them up or moving them to a different destination, for example as sketched below.
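
One way to do the cleanup (a sketch using RMAN; adjust to your own backup strategy) is to back the archive logs up and delete them from the full destination, then remove catalog entries for any files already moved at the OS level:
RMAN> backup archivelog all delete input;
RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;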

5. Archive all the log files
SQL> alter system archive log all;

6. If ORA-16020: less destinations available than specified by LOG_ARCHIVE_MIN_SUCCEED_DEST
is received for step 5, then for each destination give the correct archivelog path and issue:
SQL>alter system set LOG_ARCHIVE_DEST_.. = 'location=/archivelogpath reopen';

TIP: If you have configured Flash Recovery Area (default in 10gR2)--meaning that you have set the two initialization parameters:
DB_RECOVERY_FILE_DEST_SIZE
DB_RECOVERY_FILE_DEST
you cannot use the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters any more. You must always use the LOG_ARCHIVE_DEST_n parameters in case you have configured flash recovery area. LOG_ARCHIVE_DEST_10 is implicitly set to USE_DB_RECOVERY_FILE_DEST if you create a recovery area and do not set any other local archiving destinations. If you try to use LOG_ARCHIVE_DEST with a Flash Recovery Area configured, you will receive errors like:
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-16018: cannot use LOG_ARCHIVE_DEST with LOG_ARCHIVE_DEST_n or
DB_RECOVERY_FILE_DEST

7. Just switch the logs to verify:
SQL> alter system switch logfile;

The steps below increase the size of the flash recovery area.
1. Find out the flash recovery area location.
SQL> show parameter db_recovery_file_dest;

2. Disable the Flash Recovery Area
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='' SCOPE=BOTH SID='*';

3. Increase the Flash Recovery Area
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 10g SCOPE=BOTH SID='*';
4. Enable the Flash Recovery Area
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '/dir1' SCOPE=BOTH SID='*';

If the flash recovery area location is an Automatic Storage Management disk group named disk1, for example, then you can do the following:
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '+disk1' SCOPE=BOTH SID='*';

Some Additional Tips:
Archiving & Oracle archived redo logs

Explanation From Oracle documentation
"Oracle Database lets you save filled groups of redo log files to one or more offline destinations, known collectively as the archived redo log, or more simply the archive log. The process of turning redo log files into archived redo log files is called archiving. This process is only possible if the database is running in ARCHIVELOG mode. You can choose automatic or manual archiving.
An archived redo log file is a copy of one of the filled members of a redo log group. It includes the redo entries and the unique log sequence number of the identical member of the redo log group. For example, if you are multiplexing your redo log, and if group 1 contains identical member files a_log1 and b_log1, then the archiver process (ARCn) will archive one of these member files. Should a_log1 become corrupted, then ARCn can still archive the identical b_log1. The archived redo log contains a copy of every group created since you enabled archiving.
When the database is running in ARCHIVELOG mode, the log writer process (LGWR) cannot reuse and hence overwrite a redo log group until it has been archived. The background process ARCn automates archiving operations when automatic archiving is enabled. The database starts multiple archiver processes as needed to ensure that the archiving of filled redo logs does not fall behind."

Purpose of Archive logs
•  To recover a database
•  To update a standby database
•  To get information about the history of a database using the LogMiner utility

An Oracle database can be run in two modes: 1. ARCHIVELOG mode, 2. NOARCHIVELOG mode.
NOARCHIVELOG Mode:

When you run your database in NOARCHIVELOG mode, you disable the archiving of the redo log.
NOARCHIVELOG mode protects a database from instance failure but not from media failure. Only the most recent changes made to the database, which are stored in the online redo log groups, are available for instance recovery. If a media failure occurs while the database is in NOARCHIVELOG mode, you can only restore the database to the point of the most recent full database backup. You cannot recover transactions subsequent to that backup.
In NOARCHIVELOG mode you cannot perform online tablespace backups, nor can you use online tablespace backups taken earlier while the database was in ARCHIVELOG mode. To restore a database operating in NOARCHIVELOG mode, you can use only whole database backups taken while the database is closed. Therefore, if you decide to operate a database in NOARCHIVELOG mode, take whole database backups at regular, frequent intervals.

ARCHIVELOG Mode:
When you run a database in ARCHIVELOG mode, you enable the archiving of the redo log. The database control file indicates that a group of filled redo log files cannot be reused by LGWR until the group is archived. A filled group becomes available for archiving immediately after a redo log switch occurs.

The archiving of filled groups has these advantages:
A database backup, together with online and archived redo log files, guarantees that you can recover all committed transactions in the event of an operating system or disk failure.
If you keep an archived log, you can use a backup taken while the database is open and in normal system use.
You can keep a standby database current with its original database by continuously applying the original archived redo logs to the standby.
So, if you cannot afford to lose any data in your database in the event of a disk failure, use ARCHIVELOG mode.
In reality, most Oracle databases run in ARCHIVELOG mode.
The Oracle error ORA-00257, which we are discussing, occurs in databases that run in ARCHIVELOG mode.
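
For reference, a database is switched into ARCHIVELOG mode while it is mounted but not open; a minimal sketch:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list    -- should now show "Database log mode: Archive Mode"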

Where are the archive logs stored?

Oracle Database lets you specify whether to archive redo logs to a single destination or to multiplex them. If you want to archive only to a single destination, you specify that destination in the LOG_ARCHIVE_DEST initialization parameter. If you want to multiplex the archived logs, you can choose whether to archive to up to ten locations (using the LOG_ARCHIVE_DEST_n parameters) or to archive only to a primary and a secondary destination (using LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST).
Two methods for specifying archive destination:
Method 1: Using the LOG_ARCHIVE_DEST_n Parameter
Use the LOG_ARCHIVE_DEST_n parameter (where n is an integer from 1 to 10) to specify from one to ten different destinations for archival. Each numerically suffixed parameter uniquely identifies an individual destination.
Steps for setting archivelog destination:
1. Use SQL*Plus to shut down the database.
SHUTDOWN
2. Set the LOG_ARCHIVE_DEST_n initialization parameter to specify from one to ten archiving locations. The LOCATION keyword specifies an operating system specific path name. For example, enter:
LOG_ARCHIVE_DEST_1 = 'LOCATION = /oradata1/archive'
LOG_ARCHIVE_DEST_2 = 'LOCATION = /oradata2/archive'
3. Optionally, set the LOG_ARCHIVE_FORMAT initialization parameter, using %t to include the thread number as part of the file name, %s to include the log sequence number, and %r to include the resetlogs ID (a timestamp value represented in ub4). Use capital letters (%T, %S, and %R) to pad the file name to the left with zeroes.
example for UNIX OS:
LOG_ARCHIVE_FORMAT=%t_%s_%r.dbf
Method 2: Using LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST

To specify a maximum of two locations, use the LOG_ARCHIVE_DEST parameter to specify a primary archive destination and the LOG_ARCHIVE_DUPLEX_DEST to specify an optional secondary archive destination. All locations must be local. Whenever the database archives a redo log, it archives it to every destination specified by either set of parameters.
Perform the following steps to use method 2:
1. Use SQL*Plus to shut down the database.
SHUTDOWN
2. Specify destinations for the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameter (you can also specify LOG_ARCHIVE_DUPLEX_DEST dynamically using the ALTER SYSTEM statement). For example, enter:
LOG_ARCHIVE_DEST = '/disk1/archive'
LOG_ARCHIVE_DUPLEX_DEST = '/disk2/archive'
3. Set the LOG_ARCHIVE_FORMAT initialization parameter.
example for UNIX OS:
LOG_ARCHIVE_FORMAT=%t_%s_%r.dbf
The Oracle error ORA-00257, which we are discussing, occurs in databases that run in ARCHIVELOG mode when the above-mentioned archive destinations are full and the archiver is unable to write to them.
To resolve this error you need to find the archive log information from the database. Oracle provides a number of internal views and commands for this; the details follow:

Oracle Archive related Views:
V$DATABASE - Shows whether the database is in ARCHIVELOG or NOARCHIVELOG mode and whether MANUAL (archiving mode) has been specified.
V$ARCHIVED_LOG - Displays historical archived log information from the control file. If you use a recovery catalog, the RC_ARCHIVED_LOG view contains similar information.
V$ARCHIVE_DEST - Describes the current instance, all archive destinations, and the current value, mode, and status of these destinations.
V$ARCHIVE_PROCESSES - Displays information about the state of the various archive processes for an instance.
V$BACKUP_REDOLOG - Contains information about any backups of archived logs. If you use a recovery catalog, RC_BACKUP_REDOLOG contains similar information.
V$LOG - Displays all redo log groups for the database and indicates which need to be archived.
V$LOG_HISTORY - Contains log history information such as which logs have been archived and the SCN range for each archived log.

To find which redo log group requires archiving run the following query:
SELECT GROUP#, ARCHIVED
FROM SYS.V$LOG;

GROUP# ARC
-------- ---
1 YES
2 NO

To find the current archiving mode run the following query:
SELECT LOG_MODE FROM SYS.V$DATABASE;
LOG_MODE
------------
NOARCHIVELOG

The ARCHIVE LOG LIST Command
The SQL*Plus command ARCHIVE LOG LIST displays archiving information for the connected instance. For example:
SQL> ARCHIVE LOG LIST
Database log mode Archive Mode
Automatic archival Enabled
Archive destination D:\oracle\oradata\TESTDB1\archive
Oldest online log sequence 1270
Next log sequence to archive 1274
Current log sequence 1274

This display tells you all the necessary information regarding the archived redo log settings for the current instance:
The database is currently operating in ARCHIVELOG mode.
Automatic archiving is enabled.
The archived redo log destination is D:\oracle\oradata\TESTDB1\archive.
The oldest filled redo log group has a sequence number of 1270.
The next filled redo log group to archive has a sequence number of 1274.
The current redo log file has a sequence number of 1274.

New features in Oracle Database 12c:
====================================
Oracle Database 12c ("c" for cloud) is a multi-tenant database management system with nearly 500 new features.
  • Increased size limit for VARCHAR2, NVARCHAR2, and RAW datatypes to 32K.
  • Oracle Database 12c has a new feature called "Identity Columns", which are auto-incremented at the time of insertion (as in MySQL); see the sketch after this list.
  • Temporary undo: undo for global temporary tables can be kept in the temporary tablespace, so it does not generate redo.
  • Oracle Database 12c Data Pump will allow turning off redo for the import operation only.
  • Enhanced statistics (Hybrid histograms for more than 254 distinct values, dynamic sampling up to eleven, and stats automatically gathered during load).
  • No need to shut down the database to take it out of archive log mode.
  • Row pattern matching - MATCH_RECOGNIZE (identification of patterns within a table, ordered/sorted by the SQL statement).
  • Adaptive execution plans (change of the plan at runtime).
  • Duplicate Indexes - create duplicate indexes on the same set of columns. Up to Oracle 11.2, if we try to create an index using the same columns, in the same order, as an existing index, we get an error. In some cases we might want two different types of index on the same data (such as in a data warehouse, where we might want a bitmap index on the leading edge of a set of columns that already exists in a B-tree index).
  • Centralised patching.
  • We can test patches on database copies, rolling patches out centrally once testing is complete.
  • PL/SQL inside SQL: this new feature allows PL/SQL functions to be declared and used inside SQL statements (e.g. to create a one-shot function).
  • The object DEFAULT clause has been enhanced. Adding a column with a default value to an existing table is much faster in Oracle 12c and consumes less space than before (only a pointer in the Oracle data dictionary); this also applies to sequences, identity types, etc.
  • Partitioning enhancements (partition truncate, cascading, global index cleanup, online moving of a partition, ...)
  • Interval-Ref Partitions - we can create a ref partition (to relate several tables with the same partitions) as a sub-partition to the interval type.
  • Oracle 12c includes database level redaction, allowing granular control of access to sensitive data.
  • Pagination queries: a SQL row-limiting clause (OFFSET ... FETCH FIRST) to replace ROWNUM-based paging.
  • Moving and renaming a datafile can now be done ONLINE.
  • RMAN TABLE Point-In-Time Recovery (combination of data pump and RMAN, auxiliary instance required).
  • Oracle Enterprise Manager Express (a lightweight console) replaces the Oracle Database Console and is installed automatically.
  • The TRUNCATE command has been enhanced with a CASCADE option which follows child records.
  • Oracle introduced the parameter PGA_AGGREGATE_LIMIT which is a real memory limit.
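
A minimal sketch of the identity-column syntax mentioned above (the table and column names are examples only):
create table orders (
  order_id  number generated always as identity,
  item      varchar2(100)
);
insert into orders (item) values ('widget');   -- order_id is generated automatically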


PL/SQL
  • PL/SQL Unit Security - A role can now be granted to a code unit. That means you can determine, at a very fine grain, who can access a specific unit of code.
  • SQL WITH Clause Enhancement - In 12c, we can declare PL/SQL functions in the WITH clause of a SELECT statement; see the sketch after this list.
  • Implicit Result Sets - create a procedure, open a ref cursor, return the results. No types, no muss, no mess. Streamlined data access (something of a catch-up to other databases).
  • MapReduce in the Database - MapReduce can be run from PL/SQL directly in the database.
  • We can bind Boolean values in dynamic PL/SQL. Still no Boolean as a database column type.
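
A minimal sketch of a PL/SQL function declared in the WITH clause (the function is an example only; in SQL*Plus the statement is terminated with a slash because it contains PL/SQL):
WITH
  FUNCTION double_it(n NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN n * 2;
  END;
SELECT double_it(21) AS answer
FROM dual
/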

Compression:
Automated compression with heat map.

Optimization can be run on live databases with no disruption. Data optimization monitors data usage and, according to policy, archives old data, while hot data is compressed for faster access. Inactive data can be more aggressively compressed or archived, greatly reducing storage costs.


Advanced Row compression (for Hot Data).

Columnar Query compression (for Warm Data).
Columnar Archive compression (for Archive Data).

Data Guard:

Oracle Database 12c introduces a new redo transport mode in which the standby acknowledges the primary as soon as the redo is received, without waiting for it to be written to disk on the standby. This feature is called "Fast Sync" redo transport.

Creating a new type of redo destination called "Far Sync Standby". A "Far Sync Standby" is composed only of the standby control files, the standby redo logs and some disk space for archive logs which shall be sent to the Standby database. Failover & Switchover operations are totally transparent as the "Far Sync Standby" cannot be used as the target.

Data Guard Broker commands have been extended. The "validate database" command checks whether the database is ready for a role transition.


Global Temporary Tables can now be used on an Active Data Guard standby database.

Pluggable Databases:

In Oracle 12c, in a pluggable database environment, we can create a single container database and plug multiple databases into this container. All these databases then share the same Oracle server/background processes and memory, unlike previous versions where each database had its own background processes and shared memory. This helps with database consolidation and reduces the overhead of managing multiple disparate databases.

Container Database (CDB): holds the core (Oracle-supplied) data dictionary objects created by the database installation.

Pluggable Database (PDB): Data dictionary objects and data related to the application. We can have many PDB plugged into a single CDB.

A new admin role "CDB Administrator" has been introduced in Oracle 12.1 release databases.

Multiple databases then share a single set of background processes (such as LGWR) owned by the container, instead of each database running its own.
All Oracle database options/features are available on the PDB level.
RMAN backup at CDB level.
We can unplug a PDB from a CDB to another CDB.
PDB's can be cloned inside the CDB.
Management of PDB (clone/creation/plug/unplug/drop) are implemented as SQLs.
Extremely fast PDB provisioning (cloning inside the CDB), because each CDB comes with a "PDB Seed".
Database patch/upgrade management very quick as CDB is a single point of installation.
Each PDB has its own data dictionary.
Data Guard configuration on CDB as whole.
RMAN point-in-time recovery at PDB level (while other PDB's remains open).

Resource Manager is extended for creating, unplugging, plugging in, cloning, dropping, and even setting the open mode of a PDB.
Flashback of a PDB should be available for Oracle 12c Release 2.

Entire containers can be backed up in single run, regardless of how many databases they contain.

Upgrade one container database and all pluggable databases are upgraded.

New Commands
create pluggable database ...
alter pluggable database ...
drop pluggable database ...
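
For example, a minimal sketch of creating a PDB from the seed and opening it (the names, password, and paths are placeholders):
create pluggable database pdb1
  admin user pdb1_admin identified by some_password
  file_name_convert = ('/u01/oradata/cdb1/pdbseed/', '/u01/oradata/cdb1/pdb1/');
alter pluggable database pdb1 open;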

New Views/Packages in Oracle 12c Release1
dba_pdbs
v$pdbs
cdb_data_files

dbms_pdb
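
A quick check of the PDBs known to a container, using the views above (a sketch):
SQL> select name, open_mode from v$pdbs;
SQL> select pdb_name, status from dba_pdbs;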

RMAN HOT CLONING


Cloning steps using RMAN and hot backup:

========================================
                          
$ cd $ORACLE_HOME/network/admin
$ vi listener.ora
MY_LIS =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 172.16.55.219)(PORT = 6261))
      )
  )
)
SID_LIST_MY_LIS =
         (SID_LIST =
         (SID_DESC =
           (ORACLE_HOME =/u01/app/oracle/product/10.2.0.1.0)
           (SID_NAME =galaxy)
         )
)
$ cd $ORACLE_HOME/network/admin
$ vi tnsnames.ora
to_galaxy =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 172.16.55.219)(PORT = 6261))
    )
  (CONNECT_DATA =
    (SERVICE_NAME =galaxy)
  )
)
SQL> create tablespace rmants datafile '/u05/davedb/rmants.dbf' size 200m;
Tablespace created.
SQL>
SQL> create user rms identified by rms default tablespace rmants;
User created.
SQL>
SQL> grant connect,resource,recovery_catalog_owner to rms;
Grant succeeded.
SQL>
$ rman catalog rms/rms
Recovery Manager: Release 10.2.0.1.0 – Production on Fri Aug 3 18:28:27 2007
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
connected to recovery catalog database
RMAN>
RMAN> create catalog;
recovery catalog created
RMAN> exit
$ rman catalog rms/rms target sys/admin@to_galaxy
Recovery Manager: Release 10.2.0.1.0 – Production on Fri Aug 3 19:00:09 2007
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
connected to target database: GALAXY (DBID=3423097528)
connected to recovery catalog database
RMAN>
RMAN> register database;
database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
RMAN>
RMAN> show all;
starting full resync of recovery catalog
full resync complete
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO ‘/u01/rman/auto/%F’;
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT   ‘/u01/rman/backup_%U’;
CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT   ‘/u01/rman/backup_%U’;
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF;
CONFIGURE ENCRYPTION ALGORITHM ‘AES128′; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO ‘/u01/rman/snap/snap.f’;
RMAN>
RMAN>  run {
2>         backup database;
3>         change archivelog all validate;
4>         sql 'alter system archive log current';
5>         backup archivelog all;
6>        }
Starting backup at 08-AUG-07
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=22 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=21 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u01/product/davedb/system.dbf
input datafile fno=00005 name=/u01/product/davedb/dict.dbf
channel ORA_DISK_1: starting piece 1 at 08-AUG-07
channel ORA_DISK_2: starting full datafile backupset
channel ORA_DISK_2: specifying datafile(s) in backupset
input datafile fno=00003 name=/u01/product/davedb/sysaux.dbf
input datafile fno=00002 name=/u01/product/davedb/undotbs.dbf
input datafile fno=00004 name=/u01/product/davedb/ts11.dbf
channel ORA_DISK_2: starting piece 1 at 08-AUG-07
channel ORA_DISK_1: finished piece 1 at 08-AUG-07
piece handle=/u01/rman/backup_0mios0bk_1_1 tag=TAG20070808T093756 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_2: finished piece 1 at 08-AUG-07
piece handle=/u01/rman/backup_0nios0bk_1_1 tag=TAG20070808T093756 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:07
Finished backup at 08-AUG-07
$ mkdir auxdb
$ cd auxdb
$ mkdir bdump cdump udump
$ cd $ORACLE_HOME/dbs
$ cp initgalaxy.ora initauxdb.ora
$ vi initauxdb.ora
 Compatible=10.2.0.1.0
Db_name=auxdb
Control_files=/u01/product/auxdb/control01.ctl
Db_block_size=2048
User_dump_dest=/u01/product/auxdb/udump
Core_dump_dest=/u01/product/auxdb/cdump
Background_dump_dest=/u01/product/auxdb/bdump
Shared_pool_size=67232153
Undo_management=auto
Undo_tablespace=undotbs
db_file_name_convert=('/u01/product/davedb','/u01/product/auxdb')
log_file_name_convert=('/u01/product/davedb','/u01/product/auxdb')
remote_login_passwordfile=exclusive
$ orapwd file=orapw$ORACLE_SID password=admin entries=5
$
$ vi listener.ora
SID_LIST_MY_LIS =
         (SID_LIST =
         (SID_DESC =
           (ORACLE_HOME =/u01/app/oracle/product/10.2.0.1.0)
           (SID_NAME =galaxy)
         )
         (SID_DESC =
          (ORACLE_HOME =/u01/app/oracle/product/10.2.0.1.0)
          (SID_NAME =auxdb)
         )
)
$ vi tnsnames.ora
auxdb =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL=TCP) (HOST=172.16.55.219)(PORT=6261))
  )
  (CONNECT_DATA =
   (SERVICE_NAME = auxdb)
 )
)
$lsnrctl
LSNRCTL for Linux: Version 10.2.0.1.0 – Production on 08-AUG-2007 08:50:53
Copyright (c) 1991, 2005, Oracle.  All rights reserved.
Welcome to LSNRCTL, type “help” for information.
LSNRCTL> stop MY_LIS
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=172.16.55.219)(PORT=6261)))
The command completed successfully
LSNRCTL> start MY_LIS
Listening Endpoints Summary…
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.55.219)(PORT=6261)))
Services Summary…
Service “auxdb” has 1 instance(s).
  Instance “auxdb”, status UNKNOWN, has 1 handler(s) for this service…
Service “galaxy” has 1 instance(s).
  Instance “galaxy”, status UNKNOWN, has 1 handler(s) for this service…
The command completed successfully
LSNRCTL>
$ export ORACLE_SID=auxdb
$ sqlplus "/as sysdba"
SQL*Plus: Release 10.2.0.1.0 – Production on Wed Aug 8 08:59:59 2007
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> startup nomount pfile=$ORACLE_HOME/dbs/initauxdb.ora
ORACLE instance started.
Total System Global Area  155189248 bytes
Fixed Size                  1218220 bytes
Variable Size              96471380 bytes
Database Buffers           50331648 bytes
Redo Buffers                7168000 bytes
SQL>
$ rman catalog rms/rms target sys/admin@to_galaxy auxiliary sys/admin@auxdb
Recovery Manager: Release 10.2.0.1.0 – Production on Tue Aug 7 18:09:32 2007
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
connected to target database: GALAXY (DBID=3423097528)
connected to recovery catalog database
connected to auxiliary database: AUXDB (not mounted)
RMAN>duplicate target database to auxdb pfile=$ORACLE_HOME/dbs/initauxdb.ora;
Starting Duplicate Db at 07-AUG-07
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=35 devtype=DISK
allocated channel: ORA_AUX_DISK_2
channel ORA_AUX_DISK_2: sid=34 devtype=DISK
contents of Memory Script:
{
   set until scn  313251;
   set newname for datafile  1 to
 ”/u01/product/auxdb/system.dbf”;
   set newname for datafile  2 to
 ”/u01/product/auxdb/undotbs.dbf”;
   set newname for datafile  3 to
 ”/u01/product/auxdb/sysaux.dbf”;
   set newname for datafile  4 to
 ”/u01/product/auxdb/ts11.dbf”;
   set newname for datafile  5 to
 ”/u01/product/auxdb/dict.dbf”;
   restore
   check readonly
   clone database
   ;
}
executing Memory Script
executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting restore at 07-AUG-07
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /u01/product/auxdb/system.dbf
restoring datafile 00005 to /u01/product/auxdb/dict.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u01/rman/backup_0fioq9kk_1_1
channel ORA_AUX_DISK_2: starting datafile backupset restore
channel ORA_AUX_DISK_2: specifying datafile(s) to restore from backup set
restoring datafile 00002 to /u01/product/auxdb/undotbs.dbf
restoring datafile 00003 to /u01/product/auxdb/sysaux.dbf
restoring datafile 00004 to /u01/product/auxdb/ts11.dbf
channel ORA_AUX_DISK_2: reading from backup piece /u01/rman/backup_0gioq9kk_1_1
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/u01/rman/backup_0fioq9kk_1_1 tag=TAG20070807T180403
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:04
channel ORA_AUX_DISK_2: restored backup piece 1
piece handle=/u01/rman/backup_0gioq9kk_1_1 tag=TAG20070807T180403
channel ORA_AUX_DISK_2: restore complete, elapsed time: 00:00:07
Finished restore at 07-AUG-07
sql statement: CREATE CONTROLFILE REUSE SET DATABASE “AUXDB” RESETLOGS ARCHIVELOG
  MAXLOGFILES      5
  MAXLOGMEMBERS      5
  MAXDATAFILES      100
  MAXINSTANCES     1
  MAXLOGHISTORY      292
 LOGFILE
  GROUP  1 ( ‘/u01/product/auxdb/redolog1a.dbf’ ) SIZE 10 M  REUSE,
  GROUP  2 ( ‘/u01/product/auxdb/redolog2b.dbf’ ) SIZE 10 M  REUSE
 DATAFILE
  ‘/u01/product/auxdb/system.dbf’
 CHARACTER SET US7ASCII
contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script
released channel: ORA_AUX_DISK_1
released channel: ORA_AUX_DISK_2
datafile 2 switched to datafile copy
input datafile copy recid=1 stamp=630007813 filename=/u01/product/auxdb/undotbs.dbf
datafile 3 switched to datafile copy
input datafile copy recid=2 stamp=630007813 filename=/u01/product/auxdb/sysaux.dbf
datafile 4 switched to datafile copy
input datafile copy recid=3 stamp=630007813 filename=/u01/product/auxdb/ts11.dbf
datafile 5 switched to datafile copy
input datafile copy recid=4 stamp=630007813 filename=/u01/product/auxdb/dict.dbf
contents of Memory Script:
{
   set until scn  313251;
   recover
   clone database
    delete archivelog
   ;
}
executing Memory Script
executing command: SET until clause
Starting recover at 07-AUG-07
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=35 devtype=DISK
allocated channel: ORA_AUX_DISK_2
channel ORA_AUX_DISK_2: sid=37 devtype=DISK
starting media recovery
archive log thread 1 sequence 77 is already on disk as file /u01/product/davedb/arch/1_77_629394680.dbf
archive log thread 1 sequence 78 is already on disk as file /u01/product/davedb/arch/1_78_629394680.dbf
archive log filename=/u01/product/davedb/arch/1_77_629394680.dbf thread=1 sequence=77
archive log filename=/u01/product/davedb/arch/1_78_629394680.dbf thread=1 sequence=78
media recovery complete, elapsed time: 00:00:02
Finished recover at 07-AUG-07
contents of Memory Script:
{
   shutdown clone;
   startup clone nomount pfile= ‘$ORACLE_HOME/dbs/initauxdb.ora’;
}
executing Memory Script
database dismounted
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area     155189248 bytes
Fixed Size                     1218220 bytes
Variable Size                 96471380 bytes
Database Buffers              50331648 bytes
Redo Buffers                   7168000 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE “AUXDB” RESETLOGS ARCHIVELOG
  MAXLOGFILES      5
  MAXLOGMEMBERS      5
  MAXDATAFILES      100
  MAXINSTANCES     1
  MAXLOGHISTORY      292
 LOGFILE
  GROUP  1 ( ‘/u01/product/auxdb/redolog1a.dbf’ ) SIZE 10 M  REUSE,
  GROUP  2 ( ‘/u01/product/auxdb/redolog2b.dbf’ ) SIZE 10 M  REUSE
 DATAFILE
  ‘/u01/product/auxdb/system.dbf’
 CHARACTER SET US7ASCII
contents of Memory Script:
{
   set newname for tempfile  1 to
 ”/u01/product/auxdb/temp01.dbf”;
   switch clone tempfile all;
   catalog clone datafilecopy  “/u01/product/auxdb/undotbs.dbf”;
   catalog clone datafilecopy  “/u01/product/auxdb/sysaux.dbf”;
   catalog clone datafilecopy  “/u01/product/auxdb/ts11.dbf”;
   catalog clone datafilecopy  “/u01/product/auxdb/dict.dbf”;
   switch clone datafile all;
}
executing Memory Script
executing command: SET NEWNAME
renamed temporary file 1 to /u01/product/auxdb/temp01.dbf in control file
cataloged datafile copy
datafile copy filename=/u01/product/auxdb/undotbs.dbf recid=1 stamp=630007823
cataloged datafile copy
datafile copy filename=/u01/product/auxdb/sysaux.dbf recid=2 stamp=630007823
cataloged datafile copy
datafile copy filename=/u01/product/auxdb/ts11.dbf recid=3 stamp=630007823
cataloged datafile copy
datafile copy filename=/u01/product/auxdb/dict.dbf recid=4 stamp=630007823
datafile 2 switched to datafile copy
input datafile copy recid=1 stamp=630007823 filename=/u01/product/auxdb/undotbs.dbf
datafile 3 switched to datafile copy
input datafile copy recid=2 stamp=630007823 filename=/u01/product/auxdb/sysaux.dbf
datafile 4 switched to datafile copy
input datafile copy recid=3 stamp=630007823 filename=/u01/product/auxdb/ts11.dbf
datafile 5 switched to datafile copy
input datafile copy recid=4 stamp=630007823 filename=/u01/product/auxdb/dict.dbf
contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script
 database opened
Finished Duplicate Db at 07-AUG-07
RMAN>
 =========================================================================
Cloning steps using hot backup:
The steps below help you perform database cloning using a hot backup.
Assumptions:
1. The directory structure is different on the source and target servers
2. Oracle version : 10.2.0.4
3. OS version : Linux 5
4. target database name is same as source database name
step 1 :  Take the hot backup of source database
sql> alter database begin backup;
$ copy datafiles to backup location
sql> alter database end backup;
step 2 : Take controlfile trace and pfile or spfile (that was using by the source database)
step 3 : Install Oracle software on another machine (choose the "Install only" option in OUI). Don't create any database
step 4 : Copy all the files (including trace file and pfile or spfile) from source server to target server either using FTP or rcp
Note: It is not necessary to copy the control files and redo log files.
step 5 : Place pfile or spfile in “dbs” directory on target
step 6 : Copy the remaining files to their respective locations (If any directories are missing, do create them)
step 7 : Connect as sysdba and Startup the database in nomount stage
step 8 : Edit the trace file (that was copied) and generate a create controlfile script from it. Modify the script and specify the new locations of  the files.
step 9 : Execute the control file script, which will create the control files in the locations specified by the CONTROL_FILES parameter. Once the control files are created, the database is brought to the MOUNT state.
sql> @create_controlfile.sql
step 10 : Finally, Open the database with resetlogs option
sql> alter database open resetlogs;
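
For reference, the script generated in steps 8-9 typically looks like the following (a sketch only; the database name, file names, limits, and character set are placeholders and must be edited to match the new locations):
CREATE CONTROLFILE REUSE DATABASE "PROD" RESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
LOGFILE
    GROUP 1 '/new_path/redo01.log' SIZE 50M,
    GROUP 2 '/new_path/redo02.log' SIZE 50M
DATAFILE
    '/new_path/system01.dbf',
    '/new_path/sysaux01.dbf',
    '/new_path/undotbs01.dbf',
    '/new_path/users01.dbf'
CHARACTER SET WE8ISO8859P1;
Note that, because the copy was taken with a hot backup, the datafiles normally need media recovery (RECOVER DATABASE USING BACKUP CONTROLFILE, applying the archived logs generated during the backup) before the OPEN RESETLOGS in step 10 will succeed.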