Madhusappsdba
Migrate Non-ASM Oracle Database to ASM Storage:
I have created an ASM disk group named '+DATA' and will use it to store all database files.
Step 1 - List the database files that need to be migrated to ASM storage:
SET TERMOUT OFF;
COLUMN current_instance NEW_VALUE current_instance NOPRINT;
SELECT rpad(instance_name, 17) current_instance FROM v$instance;
SET TERMOUT ON;
PROMPT
PROMPT +------------------------------------------------------------------------+
PROMPT | Report : Data File Report (all physical files) |
PROMPT | Instance : &current_instance |
PROMPT +------------------------------------------------------------------------+
SET ECHO OFF
SET FEEDBACK 6
SET HEADING ON
SET LINESIZE 180
SET PAGESIZE 50000
SET TERMOUT ON
SET TIMING OFF
SET TRIMOUT ON
SET TRIMSPOOL ON
SET VERIFY OFF
CLEAR COLUMNS
CLEAR BREAKS
CLEAR COMPUTES
COLUMN tablespace FORMAT a30 HEADING 'Tablespace Name / File Class'
COLUMN filename FORMAT a75 HEADING 'Filename'
COLUMN filesize FORMAT 9,999,999,999,999 HEADING 'File Size'
COLUMN autoextensible FORMAT a4 HEADING 'Auto'
COLUMN increment_by FORMAT 999,999,999,999 HEADING 'Next'
COLUMN maxbytes FORMAT 999,999,999,999 HEADING 'Max'
BREAK ON report
COMPUTE sum OF filesize ON report
SELECT /*+ ordered */
d.tablespace_name tablespace
, d.file_name filename
, d.bytes filesize
, d.autoextensible autoextensible
, d.increment_by * e.value increment_by
, d.maxbytes maxbytes
FROM
sys.dba_data_files d
, v$datafile v
, (SELECT value
FROM v$parameter
WHERE name = 'db_block_size') e
WHERE
(d.file_name = v.name)
UNION
SELECT
d.tablespace_name tablespace
, d.file_name filename
, d.bytes filesize
, d.autoextensible autoextensible
, d.increment_by * e.value increment_by
, d.maxbytes maxbytes
FROM
sys.dba_temp_files d
, (SELECT value
FROM v$parameter
WHERE name = 'db_block_size') e
UNION
SELECT
'[ ONLINE REDO LOG ]'
, a.member
, b.bytes
, null
, TO_NUMBER(null)
, TO_NUMBER(null)
FROM
v$logfile a
, v$log b
WHERE
a.group# = b.group#
UNION
SELECT
'[ CONTROL FILE ]'
, a.name
, TO_NUMBER(null)
, null
, TO_NUMBER(null)
, TO_NUMBER(null)
FROM
v$controlfile a
ORDER BY 1,2
/
=====================
Tablespace Name / File Class Filename File Size Auto Next Max
------------------------------ -------------------------------------------------- ------------------ ---- ---------------- ----------------
SYSAUX /u01/app/oracle/oradata/testdb/sysaux01.dbf 450,887,680 YES 10,485,760 34,359,721,984
SYSTEM /u01/app/oracle/oradata/testdb/system01.dbf 702,545,920 YES 10,485,760 34,359,721,984
TEMP /u01/app/oracle/oradata/testdb/temp01.dbf 20,971,520 YES 655,360 34,359,721,984
UNDOTBS1 /u01/app/oracle/oradata/testdb/undotbs01.dbf 31,457,280 YES 5,242,880 34,359,721,984
USERS /u01/app/oracle/oradata/testdb/users01.dbf 5,242,880 YES 1,310,720 34,359,721,984
[ CONTROL FILE ] /u01/app/oracle/flash_recovery_area/testdb/control
02.ctl
[ CONTROL FILE ] /u01/app/oracle/oradata/testdb/control01.ctl
[ ONLINE REDO LOG ] /u01/app/oracle/oradata/testdb/redo01.log 52,428,800
[ ONLINE REDO LOG ] /u01/app/oracle/oradata/testdb/redo02.log 52,428,800
[ ONLINE REDO LOG ] /u01/app/oracle/oradata/testdb/redo03.log 52,428,800
------------------
sum 1,368,391,680
10 rows selected.
Above is the list of all database files that we will be migrating from the filesystem to ASM storage.
Step 2 - If your database is running with a pfile, edit the following parameters in the pfile to point to the appropriate ASM disk groups, then shut the database down cleanly and start it in NOMOUNT state.
control_files='+DATA'
db_create_file_dest='+DATA'
db_recovery_file_dest='+DATA'
If your database is running with an spfile, simply alter the parameters accordingly:
SQL> ALTER SYSTEM SET CONTROL_FILES='+DATA' SCOPE=SPFILE;
System altered.
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST='+DATA' SCOPE=SPFILE;
System altered.
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+DATA' SCOPE=SPFILE;
System altered.
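Before shutting down, it may be worth confirming that the new values were recorded in the spfile; a minimal sketch, assuming an OS-authenticated SYSDBA connection:
sqlplus -s / as sysdba <<'SQL'
-- Values set with SCOPE=SPFILE take effect on the next startup
SELECT name, value FROM v$spparameter
WHERE name IN ('control_files','db_create_file_dest','db_recovery_file_dest');
SQL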
Step 3 - Shut down the database normally and start it up in NOMOUNT state.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 841162752 bytes
Fixed Size 1339768 bytes
Variable Size 494931592 bytes
Database Buffers 339738624 bytes
Redo Buffers 5152768 bytes
Step 4 - Restore the control file to ASM diskgroup '+DATA' from its original filesystem location.
[oracle@localhost ~]$ rman target /
Recovery Manager: Release 11.2.0.1.0 - Production on Wed Jun 17 15:57:43 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: TESTDB (not mounted)
RMAN> restore controlfile from '/u01/app/oracle/flash_recovery_area/testdb/control02.ctl';
Starting restore at 17-JUN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=24 device type=DISK
channel ORA_DISK_1: copied control file copy
output file name=+DATA/testdb/controlfile/current.264.882633485
Finished restore at 17-JUN-15
RMAN>
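As a quick sanity check, you can confirm that the restored control file landed in the disk group; a minimal sketch using asmcmd from the Grid Infrastructure / ASM environment, with the path taken from the RMAN output above:
# List the control file directory for this database inside +DATA
asmcmd ls -l +DATA/testdb/controlfile/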
Step 5 - The database control file has now been migrated to ASM storage, so let's mount the database.
SQL> alter database mount;
Database altered.
Step 6 - Backup the database as copy to the ASM storage.
RMAN> backup as copy database format '+DATA';
Starting backup at 17-JUN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=22 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/u01/app/oracle/oradata/testdb/system01.dbf
output file name=+DATA/testdb/datafile/system.263.882633749 tag=TAG20150617T160228 RECID=1 STAMP=882633766
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=/u01/app/oracle/oradata/testdb/sysaux01.dbf
output file name=+DATA/testdb/datafile/sysaux.262.882633775 tag=TAG20150617T160228 RECID=2 STAMP=882633789
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=/u01/app/oracle/oradata/testdb/undotbs01.dbf
output file name=+DATA/testdb/datafile/undotbs1.269.882633801 tag=TAG20150617T160228 RECID=3 STAMP=882633800
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting datafile copy
copying current control file
output file name=+DATA/testdb/controlfile/backup.259.882633801 tag=TAG20150617T160228 RECID=4 STAMP=882633802
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/u01/app/oracle/oradata/testdb/users01.dbf
output file name=+DATA/testdb/datafile/users.258.882633803 tag=TAG20150617T160228 RECID=5 STAMP=882633803
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 17-JUN-15
channel ORA_DISK_1: finished piece 1 at 17-JUN-15
piece handle=+DATA/testdb/backupset/2015_06_17/nnsnf0_tag20150617t160228_0.257.882633807 tag=TAG20150617T160228 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 17-JUN-15
Step 7 - Now that we have an image copy of the database in ASM storage, we can switch the database to that copy.
RMAN> SWITCH DATABASE TO COPY;
datafile 1 switched to datafile copy "+DATA/testdb/datafile/system.263.882633749"
datafile 2 switched to datafile copy "+DATA/testdb/datafile/sysaux.262.882633775"
datafile 3 switched to datafile copy "+DATA/testdb/datafile/undotbs1.269.882633801"
datafile 4 switched to datafile copy "+DATA/testdb/datafile/users.258.882633803"
RMAN>
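Because the database was shut down cleanly and has remained mounted since the copies were taken, the image copies are consistent and no media recovery is needed here. If any redo had been generated after the copies were created, a recovery would be required before opening the database; a minimal sketch:
rman target / <<'EOF'
# Apply any redo generated after the image copies were taken
RECOVER DATABASE;
EOF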
Step 8 - Now open the database.
SQL> alter database open;
Database altered.
SQL>
Step 9 - Now create a new temp file and drop the old one, so that the temporary tablespace is also stored on ASM.
SQL> select tablespace_name, file_name, bytes from dba_temp_files;
TABLESPACE_NAME FILE_NAME BYTES
------------------------------ -------------------------------------------------- ----------
TEMP /u01/app/oracle/oradata/testdb/temp01.dbf 20971520
SQL> alter tablespace temp add tempfile size 50m autoextend on maxsize 2048m;
Tablespace altered.
SQL> select tablespace_name, file_name, bytes from dba_temp_files;
TABLESPACE_NAME FILE_NAME BYTES
------------------------------ -------------------------------------------------- ----------
TEMP /u01/app/oracle/oradata/testdb/temp01.dbf 20971520
TEMP +DATA/testdb/tempfile/temp.256.882634443 52428800
Now drop the old temp file, which is stored on the filesystem.
SQL> alter database tempfile '/u01/app/oracle/oradata/testdb/temp01.dbf' drop including datafiles;
Database altered.
SQL> select tablespace_name, file_name, bytes from dba_temp_files;
TABLESPACE_NAME FILE_NAME BYTES
------------------------------ -------------------------------------------------- ----------
TEMP +DATA/testdb/tempfile/temp.256.882634443 52428800
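Because DB_CREATE_FILE_DEST points to '+DATA', the ADD TEMPFILE command above did not need a file name; Oracle created an OMF temp file in the disk group automatically. If you prefer to name the disk group explicitly, the equivalent statement would look like this (a sketch, not taken from the session above):
sqlplus -s / as sysdba <<'SQL'
-- Explicitly place the new temp file in +DATA (an OMF name is still generated)
ALTER TABLESPACE temp ADD TEMPFILE '+DATA' SIZE 50M AUTOEXTEND ON MAXSIZE 2048M;
SQL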
Step 10 - Let's migrate the database redo log files to ASM storage now.
SQL> select a.group#, a.member, b.bytes, b.status from v$logfile a, v$log b where a.group# = b.group#;
GROUP# MEMBER BYTES STATUS
---------- -------------------------------------------------- ---------- ----------------
3 /u01/app/oracle/oradata/testdb/redo03.log 52428800 CURRENT
2 /u01/app/oracle/oradata/testdb/redo02.log 52428800 ACTIVE
1 /u01/app/oracle/oradata/testdb/redo01.log 52428800 INACTIVE
SQL> alter database drop logfile group 1;
Database altered.
SQL> alter database add logfile group 1 size 50m;
Database altered.
SQL> select a.group#, a.member, b.bytes, b.status from v$logfile a, v$log b where a.group# = b.group#;
GROUP# MEMBER BYTES STATUS
---------- -------------------------------------------------- ---------- ----------------
3 /u01/app/oracle/oradata/testdb/redo03.log 52428800 INACTIVE
2 /u01/app/oracle/oradata/testdb/redo02.log 52428800 CURRENT
1 +DATA/testdb/onlinelog/group_1.287.882635039 52428800 INACTIVE
1 +DATA/testdb/onlinelog/group_1.286.882635039 52428800 INACTIVE
SQL> alter database drop logfile group 3;
Database altered.
SQL> alter database add logfile group 3 size 50m;
Database altered.
SQL> select a.group#, a.member, b.bytes, b.status from v$logfile a, v$log b where a.group# = b.group#;
GROUP# MEMBER BYTES STATUS
---------- -------------------------------------------------- ---------- ----------------
3 +DATA/testdb/onlinelog/group_3.282.882635099 52428800 INACTIVE
2 /u01/app/oracle/oradata/testdb/redo02.log 52428800 INACTIVE
1 +DATA/testdb/onlinelog/group_1.287.882635039 52428800 CURRENT
1 +DATA/testdb/onlinelog/group_1.286.882635039 52428800 CURRENT
3 +DATA/testdb/onlinelog/group_3.280.882635099 52428800 INACTIVE
SQL> alter database drop logfile group 2;
Database altered.
SQL> alter database add logfile group 2 size 50m;
Database altered.
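Note that a redo log group can only be dropped while its status is INACTIVE. If the group you want to drop is still CURRENT or ACTIVE (as group 2 was at one point above), force a log switch and a checkpoint first, then re-check and retry the drop; a minimal sketch:
sqlplus -s / as sysdba <<'SQL'
-- Move the current log position off the group, then flush dirty buffers to disk
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
-- Drop the group once it shows INACTIVE here
SELECT group#, status FROM v$log;
SQL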
With group 2 recreated as well, all online redo log groups are now on ASM storage.
SQL> select a.group#, a.member, b.bytes, b.status from v$logfile a, v$log b where a.group# = b.group#;
GROUP# MEMBER BYTES STATUS
---------- -------------------------------------------------- ---------- ----------------
3 +DATA/testdb/onlinelog/group_3.282.882635099 52428800 INACTIVE
2 +DATA/testdb/onlinelog/group_2.289.882635185 52428800 UNUSED
1 +DATA/testdb/onlinelog/group_1.287.882635039 52428800 CURRENT
1 +DATA/testdb/onlinelog/group_1.286.882635039 52428800 CURRENT
3 +DATA/testdb/onlinelog/group_3.280.882635099 52428800 INACTIVE
2 +DATA/testdb/onlinelog/group_2.290.882635187 52428800 UNUSED
Finally, let's verify that all database files have been migrated to ASM storage by re-running the report from Step 1:
Tablespace Name / File Class Filename File Size Auto Next Max
------------------------------ --------------------------------------------------------------------------- ------------------ ---- ---------------- ----------------
SYSAUX +DATA/testdb/datafile/sysaux.262.882633775 450,887,680 YES 10,485,760 34,359,721,984
SYSTEM +DATA/testdb/datafile/system.263.882633749 702,545,920 YES 10,485,760 34,359,721,984
TEMP +DATA/testdb/tempfile/temp.256.882634443 52,428,800 YES 52,428,800 2,147,483,648
UNDOTBS1 +DATA/testdb/datafile/undotbs1.269.882633801 31,457,280 YES 5,242,880 34,359,721,984
USERS +DATA/testdb/datafile/users.258.882633803 5,242,880 YES 1,310,720 34,359,721,984
[ CONTROL FILE ] +DATA/testdb/controlfile/current.264.882633485
[ ONLINE REDO LOG ] +DATA/testdb/onlinelog/group_1.286.882635039 52,428,800
[ ONLINE REDO LOG ] +DATA/testdb/onlinelog/group_1.287.882635039 52,428,800
[ ONLINE REDO LOG ] +DATA/testdb/onlinelog/group_2.289.882635185 52,428,800
[ ONLINE REDO LOG ] +DATA/testdb/onlinelog/group_2.290.882635187 52,428,800
[ ONLINE REDO LOG ] +DATA/testdb/onlinelog/group_3.280.882635099 52,428,800
[ ONLINE REDO LOG ] +DATA/testdb/onlinelog/group_3.282.882635099 52,428,800
------------------
sum 1,557,135,360
12 rows selected.
Conclusion: The database has now been fully migrated from non-ASM (filesystem) storage to ASM.
Linux Cron job examples:
============================
MIN HOUR DOM MON DOW CMD > /tmp/output.log
Crontab Fields and Allowed Ranges (Linux Crontab Syntax)
Field   Description      Allowed Values
MIN     Minute field     0-59
HOUR    Hour field       0-23
DOM     Day of Month     1-31
MON     Month field      1-12
DOW     Day of Week      0-6
CMD     Command          Any command to be executed
/tmp/output.log is the command output log file.
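For example, a complete entry that runs a script daily at 02:00 and captures both output and errors in that log file (the script path is illustrative):
# Run a backup script at 02:00 every day, capturing stdout and stderr
00 02 * * * /home/samimalik/bin/daily-backup > /tmp/output.log 2>&1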
1. Scheduling a Job For a Specific Time Every Day
The basic usage of cron is to execute a job at a specific time, as shown below. This will execute the full backup shell script (full-backup) on June 10th at 08:30 AM.
Please note that the time field uses the 24-hour format. So for 8 AM use 8, and for 8 PM use 20.
30 08 10 06 * /home/samimalik/full-backup
30 – 30th Minute
08 – 08 AM
10 – 10th Day
06 – 6th Month (June)
* – Every day of the week
2. Schedule a Job For More Than One Instance (e.g. Twice a Day)
The following schedules an incremental backup twice a day, every day.
This example executes the specified incremental backup shell script (incremental-backup) at 11:00 and 16:00 every day. A comma-separated value in a field specifies that the command should be executed at each of the listed times.
00 11,16 * * * /home/samimalik/bin/incremental-backup
00 – 0th Minute (Top of the hour)
11,16 – 11 AM and 4 PM
* – Every day
* – Every month
* – Every day of the week
3. Schedule a Job for Specific Range of Time (e.g. Only on Weekdays)
If you want a job to be scheduled for every hour within a specific range of time, use the following.
Cron Job every day during working hours
This example checks the status of the database every day (including weekends) during working hours, 9 a.m. – 6 p.m.
00 09-18 * * * /home/samimalik/bin/check-db-status
00 – 0th Minute (Top of the hour)
09-18 – 9 am, 10 am, 11 am, 12 pm, 1 pm, 2 pm, 3 pm, 4 pm, 5 pm, 6 pm
* – Every day
* – Every month
* – Every day of the week
Cron Job every weekday during working hours
This example checks the status of the database every weekday (i.e. excluding Sat and Sun) during working hours, 9 a.m. – 6 p.m.
00 09-18 * * 1-5 /home/samimalik/bin/check-db-status
00 – 0th Minute (Top of the hour)
09-18 – 9 am, 10 am, 11 am, 12 pm, 1 pm, 2 pm, 3 pm, 4 pm, 5 pm, 6 pm
* – Every day
* – Every month
1-5 – Mon, Tue, Wed, Thu and Fri (Every Weekday)
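Ranges can also be combined with the step values described later in this article; for instance, to run the same check every 30 minutes during working hours on weekdays only:
# Every 30 minutes, 9 AM - 6 PM, Monday through Friday
*/30 09-18 * * 1-5 /home/samimalik/bin/check-db-status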
4. How to View Crontab Entries?
View Current Logged-In User’s Crontab entries
To view your crontab entries, type crontab -l from your Unix account as shown below.
samimalik@dev-db$ crontab -l
@yearly /home/samimalik/annual-maintenance
*/10 * * * * /home/samimalik/check-disk-space
[Note: This displays crontab of the current logged in user]
View Root Crontab entries
Log in as the root user (su - root) and run crontab -l as shown below.
root@dev-db# crontab -l
no crontab for root
View Other Linux User's Crontab Entries
To view the crontab entries of other Linux users, log in as root and use -u {username} -l as shown below.
root@dev-db# crontab -u sathiya -l
@monthly /home/sathiya/monthly-backup
00 09-18 * * * /home/sathiya/check-db-status
5. How to Edit Crontab Entries?
Edit Current Logged-In User’s Crontab entries
To edit crontab entries, use crontab -e as shown below. By default this will edit the crontab of the currently logged-in user.
samimalik@dev-db$ crontab -e
@yearly /home/samimalik/centos/bin/annual-maintenance
*/10 * * * * /home/samimalik/debian/bin/check-disk-space
~
"/tmp/crontab.XXXXyjWkHw" 2L, 83C
[Note: This will open the crontab file in the Vim editor for editing.
Please note that cron created a temporary file /tmp/crontab.XX... ]
When you save the above temporary file with :wq, the crontab is saved and the following message is displayed, indicating that the crontab was successfully modified.
~
"crontab.XXXXyjWkHw" 2L, 83C written
crontab: installing new crontab
Edit Root Crontab entries
Log in as the root user (su - root) and run crontab -e as shown below.
root@dev-db# crontab -e
Edit Other Linux User's Crontab File entries
To edit the crontab entries of other Linux users, log in as root and use -u {username} -e as shown below.
root@dev-db# crontab -u sathiya -e
@monthly /home/sathiya/fedora/bin/monthly-backup
00 09-18 * * * /home/sathiya/ubuntu/bin/check-db-status
~
~
~
"/tmp/crontab.XXXXyjWkHw" 2L, 83C
6. Schedule a Job for Every Minute Using Cron.
Ideally you may not have a requirement to schedule a job every minute, but understanding this example will help you understand the other examples mentioned below in this article.
* * * * * CMD
The * means all possible values, i.e. every minute of every hour throughout the year. More than using this * directly, you will find it useful in the following cases:
Specifying */5 in the minute field means every 5 minutes.
Specifying 0-10/2 in the minute field means every 2 minutes within the first 10 minutes.
The same convention can be used for all the other 4 fields, as shown in the example below.
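For example, applying a step value to the hour field instead (the command path is illustrative):
# At minute 0 of every second hour between 8 AM and 6 PM
0 8-18/2 * * * /home/samimalik/bin/check-db-status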
7. Schedule a Background Cron Job For Every 10 Minutes.
Use the following if you want to check the disk space every 10 minutes.
*/10 * * * * /home/samimalik/check-disk-space
This executes the specified command check-disk-space every 10 minutes throughout the year. But you may have a requirement to execute the command only during office hours, or vice versa; the earlier examples show how to do that.
Instead of specifying values in the 5 fields, we can specify them using a single keyword, as mentioned below.
There are special cases in which, instead of the above 5 fields, you can use @ followed by a keyword such as reboot, midnight, yearly, or hourly.
Table: Cron special keywords and their meanings
Keyword Equivalent
@yearly 0 0 1 1 *
@daily 0 0 * * *
@hourly 0 * * * *
@reboot Run at startup.
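For instance, the following two entries are equivalent (the script path is illustrative):
# Run log cleanup at 00:00 every day, written both ways
@daily    /home/samimalik/bin/cleanup-logs
0 0 * * * /home/samimalik/bin/cleanup-logs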
8. Schedule a Job For First Minute of Every Year using @yearly
If you want a job to be executed on the first minute of every year, then you can use the @yearly cron keyword as shown below.
This will execute the system annual maintenance using the annual-maintenance shell script at 00:00 on Jan 1st of every year.
@yearly /home/samimalik/red-hat/bin/annual-maintenance
9. Schedule a Cron Job Beginning of Every Month using @monthly
It is similar to @yearly above, but executes the command once a month using the @monthly cron keyword.
This will execute the shell script tape-backup at 00:00 on 1st of every month.
@monthly /home/samimalik/suse/bin/tape-backup
10. Schedule a Background Job Every Day using @daily
Using the @daily cron keyword, this will do a daily log file cleanup using the cleanup-logs shell script at 00:00 every day.
@daily /home/samimalik/arch-linux/bin/cleanup-logs "day started"
11. How to Execute a Linux Command After Every Reboot using @reboot?
Using the @reboot cron keyword, this will execute the specified command once every time the machine boots.
@reboot CMD
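A concrete sketch with a hypothetical startup script:
# Start an application once at boot time and log the output
@reboot /home/samimalik/bin/start-app >> /tmp/start-app.log 2>&1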
12. How to Disable/Redirect the Crontab Mail Output using the MAILTO Variable?
By default crontab mails the job output to the user who scheduled the job. If you want to redirect the output to a different user, add or update the MAILTO variable in the crontab as shown below.
samimalik@dev-db$ crontab -l
MAILTO="samimalik"
@yearly /home/samimalik/annual-maintenance
*/10 * * * * /home/samimalik/check-disk-space
[Note: Crontab of the current logged in user with MAILTO variable]
If you do not want the mail to be sent anywhere, i.e. to stop the crontab output from being emailed, set the MAILTO variable to an empty string as shown below.
MAILTO=""
13. How to Execute a Linux Cron Job Every Second Using Crontab?
You cannot schedule an every-second cron job, because the minimum unit you can specify in cron is a minute. In a typical scenario, there is no reason for most of us to run any job every second.
14. Specify PATH Variable in the Crontab
In all the above examples, we specified the absolute path of the Linux command or shell script that needs to be executed.
For example, instead of specifying /home/samimalik/tape-backup, if you want to just specify tape-backup, then add the path /home/samimalik to the PATH variable in the crontab as shown below.
samimalik@dev-db$ crontab -l
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/home/samimalik
@yearly annual-maintenance
*/10 * * * * check-disk-space
[Note: Crontab of the current logged in user with PATH variable]
15. Installing Crontab From a Cron File
Instead of directly editing the crontab file, you can also add all the entries to a cron file first. Once you have all those entries in the file, you can install them to the cron as shown below.
samimalik@dev-db$ crontab -l
no crontab for samimalik
$ cat cron-file.txt
@yearly /home/samimalik/annual-maintenance
*/10 * * * * /home/samimalik/check-disk-space
samimalik@dev-db$ crontab cron-file.txt
samimalik@dev-db$ crontab -l
@yearly /home/samimalik/annual-maintenance
*/10 * * * * /home/samimalik/check-disk-space
Note: This will install cron-file.txt to your crontab, which will also remove your old cron entries. So please be careful while uploading cron entries from a cron-file.txt.
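Since installing from a file replaces whatever is currently scheduled, a safer pattern is to back up the existing crontab first; a minimal sketch:
# Save the current crontab (ignore the error if none exists), then install the new file
crontab -l > crontab-backup.txt 2>/dev/null
crontab cron-file.txt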
Cluster DB Alerts in Real Time:
============================
Data Guard
Data Guard Fast-Start Failover
Data Guard Performance
Data Guard Status
Database Cardinality
Database Job Status
Database Wait Bottlenecks
Deferred Transactions
Failed Logins
Flash Recovery
Invalid Objects
Invalid Objects by Schema
Recovery
Recovery Area
Response
Segment Advisor Recommendations
Session Suspended
Snapshot Too Old
Streams Processes Count
Suspended Session
Tablespace Allocation
Tablespaces Full
Tablespaces Full (dictionary managed)
Tablespaces With Problem Segments
User Block
Automatic Storage Management Alerts:
============================
Alert Log
Alert Log Content
Alert Log Error Status
Cluster Disk Group Performance
Cluster Disk Performance
Database Disk Group Usage
Disk Group Usage
Disk Path
Disk Status
Instance Disk Group Performance
Instance Disk Performance
Offline Disk Count
Response
Single Instance Disk Group Performance
Database Upgrade from 10.2.0.4 to 11.2.0.3 in Silent Mode using DBUA:
===============================================
pre upgrade :
===============
connect to the database.
SQL> @/local/oracle/product/rdbms/11g/rdbms/admin/utlu112i.sql
minimum required SYSTEM size: 1652 M
minimum required SYSAUX size: 1047 M
minimum required UNDOTBS size: 400 M
minimum required TEMP size: 60 M
minimum required SGA size : 1076M
PURGE DBA_RECYCLEBIN;  -- from the database level
EXECUTE dbms_stats.gather_dictionary_stats;
select count(*) from dba_objects where status='INVALID';
@?/rdbms/admin/utlrp.sql
"How to Pre-Process SYS.AUD$ Records Pre-Upgrade From 10.1 or Later to 11.2". My Oracle Support Note 1329590.1
Standard Auditing
SELECT count(*) FROM sys.aud$ WHERE dbid is null;
Standard Auditing, only when Oracle Label Security (OLS) and/or Database Vault (DV) is installed
SELECT count(*) FROM system.aud$ WHERE dbid is null;
Fine Grained Auditing
SELECT count(*) FROM sys.fga_log$ WHERE dbid is null;
copy the password file to all the nodes.
verify that the db is registered in oratab or update it manually
<SID_NAME>:<old oracle home>:N
nrtd:/local/oracle/product/rdbms/10g:N
known issue :
===============
ORA-00205: error in identifying control file, check alert log for more info. The fix is to reset the setuid permissions on the oracle binary under the CRS home:
/local/oracle/product/crs/bin >ls -ltr oracle
/local/oracle/product/crs/bin >chmod 6751 oracle
/local/oracle/product/crs/bin >ls -ltr oracle
-rwsr-s--x 1 oracle oinstall 203974257 Feb 1 12:01 oracle
No end user should be connected to the database; check for remaining client sessions:
ps -ef | grep nrtd |grep -i oracle | grep LOCAL=NO
take a cold backup
stop the database using srvctl
start the database using srvctl (a sketch of both commands follows below)
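A minimal sketch of the srvctl stop/start, using the database name from this example (nrtd):
# Stop all instances of the database cleanly, then bring them back up
srvctl stop database -d nrtd -o immediate
srvctl start database -d nrtd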
-------------------------------------------------------------------------------------------
Actual upgrade :
==================
Export the environment variables for the new Oracle home:
export ORACLE_BASE=/local/oracle/product
export ORACLE_SID=nrtdtest1
export ORACLE_HOME=/local/oracle/product/rdbms/11g
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
Check that the dbua being invoked is the one from the 11g home:
/local/oracle/product/rdbms/11g/bin/dbua
dbua -silent -dbName nrtd -oracleHome /local/oracle/product/rdbms/10g
logfile location :
--------------------
/local/oracle/product/cfgtoollogs/dbua/logs
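To watch the upgrade progress, you can follow the most recently written DBUA log; a minimal sketch:
# Follow the newest file in the DBUA log directory while the upgrade runs
tail -f $(ls -t /local/oracle/product/cfgtoollogs/dbua/logs/* | head -1)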
known issue :
===============
The same ORA-00205 control file identification error described in the pre-upgrade section can also appear here; apply the same permission fix to the oracle binary under the CRS home.
post upgrade steps :
========================
ps -ef |grep pmon
change home location in oratab.dat
connect to the database
select COMP_NAME,STATUS,VERSION,MODIFIED from dba_registry;
select * from v$version;
select instance_name,host_name,version,status,startup_time from gv$instance;
select count(*) from dba_objects where status='INVALID';
@?/rdbms/admin/utlu112s.sql  -- post-upgrade status
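A minimal sketch that wraps these post-upgrade checks into a single SQL*Plus session (run with the new 11g environment set):
sqlplus -s / as sysdba <<'SQL'
-- Component registry, instance version and invalid object count after the upgrade
SELECT comp_name, status, version FROM dba_registry;
SELECT instance_name, host_name, version, status, startup_time FROM gv$instance;
SELECT count(*) AS invalid_objects FROM dba_objects WHERE status = 'INVALID';
-- Post-upgrade status tool
@?/rdbms/admin/utlu112s.sql
SQL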
Log Files in RAC Environment:
=========================
The Cluster Ready Services Daemon (CRSD) Log Files
Log files for the CRSD process (crsd) can be found in the following directory:
ORA_CRS_HOME/log/hostname/crsd
The crsd.log file is archived every 10 MB as crsd.101, crsd.102, and so on.
Oracle Cluster Registry (OCR) Log Files
The Oracle Cluster Registry (OCR) records log information in the following location:
ORA_CRS_HOME/log/hostname/client
Cluster Synchronization Services (CSS) Log Files
You can find the CSS information that the OCSSD generates in log files in the following location:
ORA_CRS_HOME/log/hostname/cssd
OCSSD is responsible for inter-node health monitoring and instance endpoint recovery. It runs as the oracle user.
The cssd.log file is archived every 20 MB as cssd.101, cssd.102, and so on.
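A minimal sketch for locating and tailing these logs on a node, assuming ORA_CRS_HOME is set in the environment and using the file names given above:
# CRSD, OCR client and CSSD logs for this node
HOST=$(hostname -s)
ls -ltr $ORA_CRS_HOME/log/$HOST/crsd/
ls -ltr $ORA_CRS_HOME/log/$HOST/client/
tail -f $ORA_CRS_HOME/log/$HOST/cssd/cssd.log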