Log Files in RAC Environment
============================
The Cluster Ready Services Daemon (crsd) Log Files
Log files for the CRSD process (crsd) can be found in the following directories:
ORA_CRS_HOME/log/hostname/crsd
The crsd.log file is archived every 10MB as crsd.l01, crsd.l02, and so on.
The Oracle Clusterware alert log: ORA_CRS_HOME/log/hostname/alert.log
Oracle Cluster Registry (OCR) Log Files
The Oracle Cluster Registry (OCR) records log information in the following location:
ORA_CRS_HOME/log/hostname/client
Cluster Synchronization Services (CSS) Log Files
You can find CSS information that the OCSSD generates in log files in the following locations:
ORA_CRS_HOME/log/hostname/cssd
OCSSD is responsible for inter-node health monitoring and instance endpoint recovery.
It runs as the oracle user.
The cssd.log file is archived every 20MB as cssd.l01, cssd.l02, and so on.



Event Manager (EVM) Log Files
Event Manager (EVM) information generated by evmd is recorded in log files in the following locations:
ORA_CRS_HOME/log/hostname/evmd
RACG Log Files
The Oracle RAC high availability trace files are located in the following two locations:
ORA_CRS_HOME/log/hostname/racg
$ORACLE_HOME/log/hostname/racg
Core files are in the sub-directories of the log directories. Each RACG executable has a sub-directory assigned exclusively for that executable. The name of the RACG executable sub-directory is the same as the name of the executable.
VIP Log Files
You can find VIP-related log files in the following location:
ORA_CRS_HOME/log/nodename/racg

OCR logs
Log files for the OCR utilities (ocrdump, ocrconfig, ocrcheck) are stored in the $ORA_CRS_HOME/log/hostname/client/ directory.

SRVCTL logs
srvctl logs are stored in two locations: $ORA_CRS_HOME/log/hostname/client/ and $ORACLE_HOME/log/hostname/client/.
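If the client logs are not detailed enough, srvctl can produce verbose trace output. A minimal sketch, assuming a bash shell and a Clusterware version that honors the SRVM_TRACE variable:

# Enable verbose srvctl tracing for the current shell session
$ export SRVM_TRACE=TRUE
$ srvctl status database -d <db_name>
# Unset it when finished, to keep the client logs quiet
$ unset SRVM_TRACE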

APPS Log Files: Log File Locations in Oracle Apps 11i/R12
=========================================================
The following log file locations can help you track down issues and errors in your Apps 11i/R12 instance.

Database Tier Logs

Alert Log File location:
$ORACLE_HOME/admin/$CONTEXT_NAME/bdump/alert_$SID.log

Trace file location:
$ORACLE_HOME/admin/SID_Hostname/udump
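A quick way to check these for recent errors (a sketch, assuming a Unix shell with $CONTEXT_NAME and $SID set as in the paths above):

# Show the most recent ORA- errors recorded in the alert log
$ grep -n "ORA-" $ORACLE_HOME/admin/$CONTEXT_NAME/bdump/alert_$SID.log | tail -20
# Follow the alert log live while reproducing an issue
$ tail -f $ORACLE_HOME/admin/$CONTEXT_NAME/bdump/alert_$SID.log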

Application Tier Logs

Start/Stop script log files location:
$COMMON_TOP/admin/log/CONTEXT_NAME/

OPMN log file location:
$ORACLE_HOME/opmn/logs/ipm.log

Apache, Jserv, JVM log files locations:
$IAS_ORACLE_HOME/Apache/Apache/logs/ssl_engine_log
$IAS_ORACLE_HOME/Apache/Apache/logs/ssl_request_log
$IAS_ORACLE_HOME/Apache/Apache/logs/access_log
$IAS_ORACLE_HOME/Apache/Apache/logs/error_log
$IAS_ORACLE_HOME/Apache/JServ/logs

Concurrent log file location:
$APPL_TOP/admin/PROD/log or $APPLCSF/$APPLLOG

Patch log file location:
$APPL_TOP/admin/PROD/log

Worker Log file location:
$APPL_TOP/admin/PROD/log

AutoConfig log files location:
Application Tier:
$APPL_TOP/admin/SID_Hostname/log/DDMMTime/adconfig.log

Database Tier:
$ORACLE_HOME/appsutil/log/SID_Hostname/DDMMTime/adconfig.log

Error log file location:
Application Tier:
$APPL_TOP/admin/PROD/log

Database Tier:
$ORACLE_HOME/appsutil/log/SID_Hostname


In Oracle Applications R12, the log files are located in $LOG_HOME (which translates to $INST_TOP/logs)
The following list of log file locations may be helpful:

Concurrent Request related logs
$LOG_HOME/appl/conc -> location for concurrent request log and out files
$LOG_HOME/appl/admin -> location for mid-tier startup script log files

Apache Logs (10.1.3 Oracle Home which is equivalent to iAS Oracle Home - Apache, OC4J and OPMN)
$LOG_HOME/ora/10.1.3/Apache -> location for Apache error and access log files
$LOG_HOME/ora/10.1.3/j2ee -> location for j2ee-related log files
$LOG_HOME/ora/10.1.3/opmn -> location for opmn-related log files

Forms & Reports related logs (10.1.2 Oracle home which is equivalent to 806 Oracle Home)
$LOG_HOME/ora/10.1.2/forms
$LOG_HOME/ora/10.1.2/reports

Startup/Shutdown Log files location:
$INST_TOP/apps/$CONTEXT_NAME/logs/appl/admin/log

Patch log files location:
$APPL_TOP/admin/$SID/log/
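To watch a patch session while it runs, something along these lines can help (a sketch; worker logs named adworkNNN.log are the usual convention, and the main log name is whatever you entered at the adpatch prompt, adpatch.log by default):

# Most recently updated logs last
$ cd $APPL_TOP/admin/$SID/log
$ ls -ltr | tail
# Follow the main patch log and one worker log
$ tail -f adpatch.log
$ tail -f adwork001.log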

Clone and AutoConfig log files location in Oracle E-Business Suite Release 12

Logs for adpreclone.pl are located:
On the database tier:
RDBMS $ORACLE_HOME/appsutil/log/<context>/StageDBTier_<timestamp>.log

On the application tier:
$INST_TOP/admin/log/StageAppsTier_<timestamp>.log

Logs for admkappsutil.pl are located:
On the application tier:
$INST_TOP/admin/log/MakeAppsUtil_<timestamp>.log


Move the CRD Files (Controlfile, Redo Logs, Datafiles) From One Diskgroup to a Different Diskgroup

 

How To Move The Database To Different Diskgroup (Change Diskgroup Redundancy) (Doc ID 438580.1)

APPLIES TO:

Oracle Database - Enterprise Edition - Version 10.1.0.2 to 12.1.0.1 [Release 10.1 to 12.1]
Oracle Database - Standard Edition - Version 10.1.0.2 to 12.1.0.1 [Release 10.1 to 12.1]
Information in this document applies to any platform.

GOAL

Automatic Storage Management (ASM) is an integrated file system and volume manager expressly built for Oracle database files.

This note is applicable:

1. If you wish to move to different ASM storage/hardware.
2. If you wish to change the redundancy of the diskgroup.

Once a diskgroup is created with a given redundancy (External, Normal, or High), its redundancy cannot be changed. The need to change redundancy can arise if:

- DBAs want to switch from Normal/High redundancy to External redundancy due to disk space constraints, or due to plans to maintain redundancy externally (e.g. RAID 10).

- DBAs want to switch to ASM-based redundancy, i.e. convert from External redundancy to Normal/High redundancy.

This note discusses the steps to change the redundancy of existing diskgroups in ASM.
Note: These steps have been tried and tested internally, but we suggest testing them in a test environment before executing them in production.

Taking a full cold backup of the database beforehand is also recommended.

SOLUTION

To collect the list of files in ASM with their full paths, use Note 888943.1, "How to collect the full path name of the files in ASM diskgroups".
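As a rough sketch of what that note provides (see the note itself for the definitive query), the full alias paths can be pulled from the V$ASM views on the ASM instance:

-- List every file in every mounted diskgroup with its full +DISKGROUP path
SQL> SELECT CONCAT('+' || gname, SYS_CONNECT_BY_PATH(aname, '/')) full_path
     FROM (SELECT g.name gname, a.parent_index pindex,
                  a.name aname, a.reference_index rindex
           FROM v$asm_alias a, v$asm_diskgroup g
           WHERE a.group_number = g.group_number)
     START WITH (MOD(pindex, POWER(2, 24))) = 0
     CONNECT BY PRIOR rindex = pindex;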

There are two ways to perform this:

1. Create a new diskgroup with the desired redundancy and move the existing data to it.

2. Drop the existing diskgroup after backing up the data, and create a new diskgroup with the desired redundancy.

CASE 1: Create a new diskgroup with the desired redundancy and move the existing data to the newly created diskgroup.

1) If extra disk space is available, we can create a new diskgroup and move the files from the old diskgroup to it.

-- Initially we have two diskgroups with external redundancy:
SQL> select state,name from v$asm_diskgroup;

STATE NAME
----------- --------------------
MOUNTED DG2
MOUNTED DG3

2) Create a new diskgroup with normal redundancy:
SQL> create diskgroup DG1 normal redundancy failgroup <failgroup1_name> disk '<disk1_name>' failgroup <failgroup2_name> disk '<disk2_name>';

SQL> select state,name,type from v$asm_diskgroup;

STATE NAME TYPE
----------- ------------------- ------
MOUNTED DG2 EXTERN
MOUNTED DG3 EXTERN
MOUNTED DG1 NORMAL

3) Back up the current database as follows:
SQL> show parameter db_name

NAME TYPE VALUE
---------------- ----------- ----------------------------
db_name string orcl10g

SQL> create pfile='d:\initsid.ora' from spfile;

SQL> alter database backup controlfile to '+DG1';

SQL> alter system set control_files='+DG1\ORCL10G\CONTROLFILE\<system generated control file name from diskgroup DG1>' SCOPE=SPFILE;

-- Connect to rman
$ rman target /
RMAN> shutdown immediate;
RMAN> startup nomount;
RMAN> restore controlfile to '<new_diskgroup, i.e. +DG1>' from '+DG2\ORCL10G\CONTROLFILE\mycontrol.ctl'; (specify the original (old) location of the controlfile here)

Mount the database and validate the controlfiles from v$controlfile

RMAN> alter database mount;

RMAN> backup as copy database format '+DG1';

With "BACKUP AS COPY", RMAN copies the files as image copies, bit-for-bit copies of database files created on disk.These are identical to copies of the same files that you can create with operating system commands like cp on Unix or COPY on Windows.However, using BACKUP AS COPY will be recorded in the RMAN repository and RMAN can use them in restore operations.

4) Switch the database to the copy. At this point we switch to the new diskgroup:
RMAN> switch database to copy;

A SWITCH is equivalent to the SQL statement "alter database rename file".

RMAN> recover database;

This recovers the backup controlfile (taken and restored above) so that it is in sync with the database/datafiles.

RMAN> alter database open resetlogs;


5) Add a new tempfile, placed in the new diskgroup:
SQL> alter tablespace TEMP add tempfile '+DG1' SIZE 10M;

Drop any existing tempfile on the old diskgroup:
SQL> alter database tempfile '+DG2/orcl10g/tempfile/temp.265.626631119' drop;

6) Find out how many members exist in each redo log group, and make sure each log group has only one member (drop the other members). The query below can help.
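A quick way to list the current groups and members (standard V$ views):

SQL> -- One row per redo log member, with its group and member count
SQL> select l.group#, l.members, f.member
     from v$log l, v$logfile f
     where l.group# = f.group#
     order by l.group#;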

Suppose we have 3 log groups; add one member in the new diskgroup to each group as follows:
SQL> alter database add logfile member '+DG1' to group 1;
SQL> alter database add logfile member '+DG1' to group 2;
SQL> alter database add logfile member '+DG1' to group 3;

Then drop the old logfile members from the earlier diskgroup:
SQL> alter database drop logfile member 'complete_name';


7) Use the following query to verify that all files have moved to the new diskgroup with the desired redundancy:

SQL> select name from v$controlfile
union
select name from v$datafile
union
select name from v$tempfile
union
select member from v$logfile
union
select filename from v$block_change_tracking;

8) Enable block change tracking using the ALTER DATABASE command:
SQL> alter database enable block change tracking using file '<FILE_NAME>';


CASE 2: Drop the existing diskgroup after backing up the database, and create a new diskgroup with the desired redundancy.

1. Shutdown (immediate) the database and then startup mount. Take a valid RMAN backup of the existing database:
RMAN> backup device type disk format 'd:\backup\%U' database;

RMAN> backup device type disk format 'd:\backup\%U' archivelog all;

2. Make a copy of the spfile and a controlfile backup in an accessible location:

SQL> create pfile='d:\initsid.ora' from spfile;

SQL> alter database backup controlfile to 'd:\control.ctl';

3. Shutdown the RDBMS instance
SQL> shutdown immediate


4. Connect to ASM Instance and Drop the existing Diskgroups
SQL> drop diskgroup DG1 including contents;


5. Shutdown the ASM instance.

6. Startup the ASM instance in nomount state and create the new ASM diskgroup:
SQL> startup nomount
SQL> create diskgroup DG1 external redundancy disk '<disk_name>';

The new diskgroup's name should be the same as the previous diskgroup's; this facilitates the RMAN restore process.

7. Connect to the RDBMS instance and startup in nomount state using the pfile:
startup nomount pfile='d:\initsid.ora'
SQL> create spfile from pfile='d:\initsid.ora';

8. Now restore the controlfile and backups using RMAN:
RMAN > restore controlfile from 'd:\control.ctl';
RMAN> alter database mount;
RMAN> restore database;
RMAN> recover database;
unable to find archive log
archive log thread=1 sequence=4
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 07/05/2007 18:24:32
RMAN-06054: media recovery requesting unknown log: thread 1 seq 4 lowscn
570820

During recovery it will report a missing archive log; this is expected, and we need to open the database with resetlogs:

RMAN> alter database open resetlogs;

- We also need to point the Flash Recovery Area to the newly created diskgroup:
SQL> alter system set db_recovery_file_dest='+DG1' scope=both;

Performance for Concurrent Managers in E-Business Suite
========================================================
APPLIES TO:

Oracle Application Object Library - Version 11.5.0 to 12.1 [Release 11.5 to 12.1]
Information in this document applies to any platform.
PURPOSE

Provide the best practices to achieve better performance for concurrent manager in Oracle E-Business Suite.

Please also visit the new Concurrent Processing Product Information Center (Note 1304305.1) for the latest in CP recommendations and solutions.
SCOPE

Applications DBAs, System Administrators involved  in configuration and administration of Oracle E-Business Suite.
DETAILS

Best Practices for Performance for Concurrent Managers in E-Business Suite
This Document contains 5 topics.

1. Generic Tips
2. Transaction Manager (TM).
3. Parallel Concurrent Processing (PCP) Environment.
4. Tuning Output Post Processor (OPP).
5. Concurrent Processing Server Tuning

Generic Tips

1) Sleep Seconds - the number of seconds your concurrent manager waits between checks of the list of pending concurrent requests (concurrent requests waiting to be started). A manager only sleeps if there are no runnable jobs in the queue.

Tip: During peak times, when the number of submitted requests is expected to be high, set the sleep time to a reasonable wait (e.g. 30 seconds), depending on average run times, to prevent a backlog. Otherwise, set the sleep time to a high number (e.g. 2 minutes); this avoids constant polling for new requests.

2) Increase the cache size (number of requests cached) to at least twice the number of target processes.

For example, if a manager's work shift has 1 target process and a cache value of 3, it will read three requests, and try to run those  three requests before reading any new requests.

Tip: Enter a value of 1 when defining a manager that runs long, time-consuming jobs, and a value of 3 or 4 for managers that run small, quick jobs.
This is only guidance, and a balance needs to be struck in tuning the cache: with fast jobs, cache enough work for a few minutes; with slow jobs, a small queue helps should you need to reprioritize requests.

3) Create specialized concurrent managers dedicated to either short- or long-running programs, to keep queue lengths down.

4) To maximize throughput, consider reducing the sleep time of the Conflict Resolution Manager (CRM). The default value is 60 seconds; consider setting it to 5 or 10 seconds.

5) Avoid enabling an excessive number of standard or specialized managers: it can degrade performance due to polling on the queue tables (FND_CONCURRENT_REQUESTS, ...). Create specialized managers only if there is a real need.

6) Set the system profile option "Concurrent: Force Local Output File Mode" to "Yes" if required. You need to apply patch 7530490 for R12 (or 7834670 for 11i) to get this profile.

Refer to Note 822368.1: Purge Concurrent Request FNDCPPUR Does Not Delete Files From File System or Slow Performance.

Note: The profile option "Concurrent: Force Local Output File Mode" is set to "No" by default. After applying the patch, setting the profile option to "Yes" causes FNDCPPUR to always access files on the local file system, so FNDCPPUR removes the OS files faster. To enable this feature, all Concurrent Manager nodes must be able to access the output file location via the local filesystem.

7) Truncate the reports.log file in the log directory. Refer to Note 844976.1 for more details.

Truncating "reports.log" is regular maintenance work for an Applications DBA. Make sure the reports.log file does not grow to its maximum limit of 2 GB. There is no purge program that truncates "reports.log"; this maintenance must be done manually and regularly, depending on how many concurrent programs use "reports.log". You can safely truncate "reports.log".

The "reports.log" file can be located under $APPLCSF/$APPLLOG.

8) Ensure "Purge Concurrent Request and/or Manager Data, FNDCPPUR,"  is run at regular intervals with "Entity" parameter as "ALL".  A high number of records in FND_CONCURRENT tables can degrade the performance.

Additionally, the following are very good methods to follow for optimizing the process:
Run the job during hours with a low workload; running it after hours lessens contention on the tables from your daily processing.
To get the requests under control, first run FNDCPPUR with Age=20 or Age=18; that purges all requests older than 20 or 18 days.
Once the requests are under control, run FNDCPPUR with Age=7 to maintain an efficient process. This depends entirely on the level of processing performed at your site. A query for sizing the backlog is sketched below.
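To gauge how much history a given Age value would purge, a simple count by age can help (a sketch; run as the APPS user, and note that FNDCPPUR applies its own eligibility rules beyond age):

SQL> -- How many requests are older than 18 days (candidates for Age=18)?
SQL> select count(*)
     from fnd_concurrent_requests
     where request_date < sysdate - 18;
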
9) Ensure that the log/out files are removed from the locations shown below as you run "Purge Concurrent Request and/or Manager Data program".

 $APPLCSF/$APPLLOG
 $APPLCSF/$APPLOUT

If it does not remove the log/out files, performance will degrade over time. Please refer to the following note, which identifies the patch that fixes this.

Note.822368.1: Purge Concurrent Request FNDCPPUR Does Not Delete Files From File System or Slow performance


10) Defragment the following tables periodically to reclaim unused space and improve performance:

FND_CONCURRENT_REQUESTS
FND_CONCURRENT_PROCESSES
FND_CRM_HISTORY
FND_ENV_CONTEXT
FND_TEMP_FILES


HOW TO DEFRAGMENT

10.1) alter table <owner>.<table_name> move;
10.2) Note that some indexes might become unusable after the table is moved; check the index status in dba_indexes for the moved table and rebuild any unusable indexes as explained in the next bullet.

select owner, index_name, status from dba_indexes
where table_owner = upper('&OWNER') and
table_name = upper('&SEGMENT_NAME');
10.3) alter index <owner>.<index_name> rebuild online;
Note: Ensure the tablespace in which the object currently resides has sufficient space before you move/defragment. Always take a backup of the tables before moving the data. It is recommended to perform this action on a test instance first and test thoroughly before performing it on the production instance.

10.4) You will need to collect the statistics for the tables.

For example:
exec fnd_stats.gather_table_stats ('APPLSYS','FND_CONCURRENT_REQUESTS',PERCENT=>99);
11) Ensure you are on the latest code to avoid the known performance and deadlock issues below.
Note 1492893.1 R12: Performance Issue When Standard Managers Waiting for "enq: TX - row lock contention" Held By ICM
Note 1060736.1 Deadlock Error During Concurrent Request Termination
Note 866298.1 Concurrent Processing - ORA-00060: Deadlock Detected - UPDATE FND_CONCURRENT_QUEUES
Note 1360118.1 Performance: Concurrent Requests Hang in Pending Status For Long Time
Note 1075684.1 Concurrent Managers are consuming high CPU and memory
Transaction Manager (TM)

12) Profile Concurrent:Wait for Available TM - total time to wait for a TM before switching over to the next available TM. Consider setting this to 1 (second).

13) Ensure enough TMs exist to service the incoming request load.

14) When the load is high, set the following profile to optimum values to achieve better results.

 PO: Approval Timeout Value  -  Total time for workflow call (When initiated from Forms) to time out.

15) Set the sleep time on the Transaction Manager to a high number (e.g. 10 minutes); this avoids constant polling for shutdown requests.
Parallel Concurrent Processing (PCP) Environment

16) If the failover of managers is taking too long refer to Note:551895.1: Failover Of Concurrent Manager Processes Takes More than 30 Minutes

17) Refer NOTE:1389261.1 when you are in the process of implementing PCP.

18) Set the profile option 'Concurrent: PCP Instance Check' to 'OFF' if instance-sensitive failover is not required. Setting it to 'ON' means that concurrent managers fail over to a secondary application-tier node if the database instance they are connected to goes down.

19) Prior to 11i.ATG_PF.H RUP3, the Transaction Manager uses DBMS_PIPE to communicate with the application session; DBMS_PIPE in turn uses OS pipes. From 11i.ATG_PF.H RUP3 onward, Advanced Queuing (AQ) can be used instead by setting the system profile "Concurrent: TM Transport Type" to "QUEUE".

Note: Pipes are more efficient, but they require a Transaction Manager to be running on each DB instance (RAC). So you might want to use "QUEUE" for easier maintenance.

20) Add these parameters depending on your database version:

                + _lm_global_posts=TRUE
                + _immediate_commit_propagation=TRUE  (11g RAC)
                + max_commit_propagation_delay=0  (9i RAC)

21) To speed up PCP failover, tune the parameters below (a configuration sketch follows this list).
Kernel parameters (Find the analogous parameter for your platform)
tcp_keepalive_intvl
tcp_keepalive_probes
tcp_keepalive_time (do not set this value too low, since that will use up network resources with unnecessary traffic)
DCD (Dead Connection Detection) setup; sqlnet.ora on the database tier:
sqlnet.expire_time
Environment variable at the Concurrent Manager tier:
FDCPTRW
PMON Cycle & Sleep Intervals for ICM (internal Concurrent Manager) setup.
Navigation OAM -> SiteMap -> Monitoring -> Internal Concurrent Manager Link(Under Availability) -> "View Status" -> "Edit ICM Runtime Parameters"
Enable Reviver.
What is FNDREVIVER and How Is It Set? (Document 466752.1)
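As an illustration of the DCD and keepalive pieces above (values are examples only, not recommendations; Linux parameter names shown, other platforms differ):

# sqlnet.ora on the database tier: probe idle client connections every 10 minutes
sqlnet.expire_time = 10

# Linux kernel keepalive settings, e.g. in /etc/sysctl.conf
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 5

# Apply the kernel settings without a reboot
$ sysctl -p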
Tuning Output Post Processor (OPP)

To tune the OPP for better performance, refer to the note below.
NOTE:1399454.1 Tuning Output Post Processor (OPP) to Improve Performance
Concurrent Processing Server Tuning

1. Any Concurrent Processing (CP) server tuning or load-balancing needs are to be addressed by Oracle Consulting. There are far too many site-specific factors that need to be considered for optimum CP throughput: machine hardware, user request volume, required Work Shifts, program run-time characteristics (long/short running), not to mention testing and benchmarking. Such a task is beyond the scope of ATG Support.

ATG Support would be glad to investigate a failing manager or program issue; however, CP performance issues due to increased concurrent request volume or a new installation need to be addressed by Oracle Consulting.

2. The "Tuning Concurrent Processing" chapter of the white paper "A Holistic Approach To Performance Tuning Oracle Applications Systems Release 11 and 11i" Note 69565.1 may provide some basic insight. Also reference the "Defining Concurrent Managers" and the "Setting Up and Starting Concurrent Managers" chapters of the "Oracle Applications System Administrator's Guide - Configuration".

3. As per Note 69565.1 "A Holistic Approach to Performance Tuning Oracle Applications Systems", "50% of concurrent processing performance tuning is in the business!"

4. Visit the Concurrent Processing Product Information Center (PIC) Note 1304305.1 for additional performance and setup documentation.

Information Center, Diagnostics, & Community

E-Business Concurrent Processing Information Center Document 1304305.1
Please reference this document regularly to review current offerings for Concurrent Processing needs.
Diagnostics
For additional help, please refer to one of the following documents on diagnostics to address current needs. Providing diagnostic output on an issue when logging a service request is very helpful.
Document 179661.1 for 11i or Document 421245.1 for Rel 12.x
Core Concurrent Processing Community
Visit the Core Concurrent Processing community for help from industry experts or to share knowledge.
REFERENCES

NOTE:1389261.1 - PCP Concurrent Manager Failover/Failback Does not work When Application Listener is down on Primary Node
NOTE:1060707.1 - Purge Concurrent Requests/Manager Data, FNDCPPUR, Not Removing Files From Filesystem
NOTE:466752.1 - Concurrent Processing - What is FNDREVIVER and How Is It Set?
NOTE:551895.1 - Concurrent Processing - Failover Of Concurrent Manager Processes Takes More than 30 Minutes
NOTE:822368.1 - Concurrent Processing - How To Run the Purge Concurrent Request FNDCPPUR, Which Tables Are Purged, And Known Issues Like Files Are Not Deleted From File System or Slow Performance
NOTE:104452.1 - Concurrent Processing - Troubleshooting Concurrent Manager Issues (Unix specific)
NOTE:1075684.1 - Concurrent Managers are consuming high CPU and memory
NOTE:1360118.1 - Performance: Concurrent Requests Hang in Pending Status For Long Time
NOTE:1060736.1 - Deadlock Error During Concurrent Request Termination
NOTE:866298.1 - Concurrent Processing - ORA-00060: Deadlock Detected - UPDATE FND_CONCURRENT_QUEUES
NOTE:1399454.1 - Tuning Output Post Processor (OPP) to Improve Performance
NOTE:844976.1 - Concurrent Processing - Concurrent Reports Failing With Errors REP-0004,REP-0082 and REP-0104
NOTE:1492893.1 - R12: Performance Issue When Standard Managers Waiting for "enq: TX - row lock contention" Held By ICM

After Cloning Oracle Applications And Resetting APPS Password, Discoverer 10g/11g Fails With Error

SYMPTOMS

After successfully cloning an Oracle Applications E-Business Suite instance and changing "APPS" user password, connections to Discoverer fail with error:
"Error:
A connection error has occurred.
- OracleBI Discoverer was unable to authenticate using the password provided. This can happen due
to an invalid password or because the password was lost while using back, forward, or refresh in your browser. Please enter the password again to continue.
- Failed to connect to database - Unable to connect to Oracle Applications database (afscpgcs)".

SOLUTION

To implement the solution, please execute the following steps:
  1. In the cloned Oracle Applications instance, log in as SYSADMIN and set the site-level value of the profile option "Signon Password Case" to "Insensitive".
  2. Stop the Oracle Applications tier services and concurrent server.
  3. Use FNDCPASS to change the APPS/APPLSYS password (see the sketch after this list).
  4. Run AutoConfig on the application tiers so that the new APPS password is propagated correctly.
  5. Restart the application tier and concurrent servers.
  6. Reset "Signon Password Case" back to "Sensitive".
  7. After making these changes, connections to Discoverer will succeed without error.
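For reference, the usual FNDCPASS invocation for changing the APPLSYS/APPS password looks like this (a sketch; the passwords are placeholders):

# Syntax: FNDCPASS <apps logon> 0 Y <system logon> SYSTEM APPLSYS <new password>
$ FNDCPASS apps/<current_apps_pwd> 0 Y system/<system_pwd> SYSTEM APPLSYS <new_apps_pwd>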




Apps Upgrade From 11.5.10.2 to 12.1.3

If your 11i database size is anywhere from 100 to 150 GB, the upgrade downtime won't be too long. We have seen environments complete the 12.1.1 upgrade in under 12 hours and the 12.1.3 upgrade in about 4 hours.

If the transaction data is above 750 GB, then finishing the 12.1.1 upgrade within 24 hours won't be possible; in that case a selective upgrade is chosen.

Typically, the upgrade steps will be:
0. OATM Migration
1. Pre-Upgrade Activities (Functional, Technical, DBA) as per Upgrade Document
2. Database Upgrade
3. 12.1.1 Upgrade
4. Configure Applications and Let concurrent programs complete
5. Post Upgrade Tasks (Functional, Technical, DBA) as per Upgrade Document
6. 12.1.3+HRMS RUP4+ASCP RUP3 Upgrade
7. iAS Upgrade, Java Upgrade, JRE Upgrade
8. Customizations Upgrade (Technical)
9. Upgrade Data Verification, Enhancement Setups (Functional)
10. Shared Apps Tier, Multi-Node Configuration, etc.
Now, the tips below might be useful if your database is anywhere between 150 GB and 750 GB.
Before the upgrade downtime:
1. Purge all possible runtime tables:
Purge Concurrent Request and/or Manager Data
Purge Obsolete Workflow Runtime Data
You can also purge Workflow history of Permanent persistence type; just get confirmation from My Oracle Support first.
Purge Debug Log And System Alerts
Purge Obsolete Generic File Manager Data
Purge Signon Audit data
Purge Inactive Sessions
Purge RX Interface Data
PURGE Messages (OE Processing Messages)
Purge ATP Temp Table
Purge Rule Execution
2. Reorganize all the tables & indexes affected by the above concurrent programs.
3. Update FND_LOBS not to index binary data.
4. Migrate to OATM well before the upgrade downtime. It will reduce your upgrade time significantly.
It not only reduces your database size but also reorganizes your tables, which gives better performance.
5. Disable any Debug, Trace, or Diagnostics profile options that are set.
6. On various occasions you may have taken backups of many tables. Verify and delete them.
7. If you have custom tables that store data just for history, compress them and keep them.
Just before the upgrade:
1. Put your database in noarchivelog mode before starting the upgrade driver (see the sketch at the end of this list).
Because the upgrade driver will generate a huge amount of archive logs (say 250 GB), which will fill up the archive log location.
You won't realistically use them for recovery; a restore will be easier instead.
You can re-enable archivelog mode after the 12.1.3 upgrade driver finishes.
2. Increase your online redo log size to 2 GB minimum, with a minimum of 6 groups (in case of RAC, 4 groups per instance).
3. Increase your undo tablespace to 20 GB.

4. Increase your TEMP tablespace to 20 GB.
5. Increase your APPS_TS_TX_DATA and APPS_TS_TX_IDX tablespaces.
In the first test upgrade capture tablespace size at each stage,
before 12.1.1 upgrade,
after 12.1.1 upgrade,
after 12.1.3 upgrade,
before releasing the instance.
Then analyze the difference in tablespace size to get the exact increase in each tablespace.
6. If your SGA is below 5 GB, increase it to 5-10 GB if your physical memory permits.
7. If your PGA is below 5 GB, increase it to 5 GB if your physical memory permits.
8. Merge the AD CUP patches with AD.B and apply them in admode.

9. Merge and apply the EBS CUP patches in preinstall mode, and then merge them with the 12.1.1 upgrade driver.

10. You do not need to recompile invalid objects throughout the upgrade; when an object is accessed at runtime it is compiled automatically.
So, use adpatch options=nocompiledb from the 12.1.1 upgrade driver until the last patch.
Allow object compilation only at the last patch.

Additionally, for the 12.1.1 upgrade driver, you have the option below.
Add "extension plsql_no compile yes" to u_merged.drv:
   extension patch_type software base
   extension plsql_no compile yes
   extension patchinfo maintpack 12.0.0
Also comment out the lines below (the same line appears twice):
      sql ad patch/115/sql adobjcmp.sql
      sql ad patch/115/sql adobjcmp.sql

11. In adpatch, when asked for the batch commit size, use 10000. Using the default 1000 can cause severe performance problems in the upgrade scripts. A script that took 24 hours with batch size 1000 took 1 hour 45 minutes with batch commit size 10000.
12. Use a number of workers equal to the number of actual physical processors in the database server. Don't count cores as processors. If you have given more workers during the patch, you can reduce the number of workers at any time by quitting them using adctrl.
13. Review the timing reports of the 12.1.1 upgrade driver and the 12.1.3+ driver:
        Check for files (jobs) that have taken more time.
        Check for phases that have taken more time.
Compare the timing reports of each test upgrade.
14. Document each worker failure and its solution, so that in the next upgrade you can proactively apply the fix.
15. Resolve all the invalid objects and keep the solutions ready with you.

16. Along with 12.1.3, merge HRMS RUP4, the latest ASCP RUP, and all bug-fix patches from the first test upgrade, and apply them.
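As referenced in step 1 above, a minimal sketch of the database preparation (noarchivelog plus the redo/undo/temp sizing; all file names and sizes are placeholders to adapt):

SQL> -- Disable archiving for the duration of the upgrade drivers
SQL> shutdown immediate
SQL> startup mount
SQL> alter database noarchivelog;
SQL> alter database open;

SQL> -- Add 2 GB redo log groups until there are at least 6;
SQL> -- drop the old, smaller groups once they go INACTIVE
SQL> alter database add logfile group 4 ('<path>/redo04.log') size 2g;

SQL> -- Grow undo and temp toward the 20 GB guideline
SQL> alter database datafile '<path>/undotbs01.dbf' resize 20g;
SQL> alter database tempfile '<path>/temp01.dbf' resize 20g;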

