WebLogic Admin Challenges and Issues


Madhusappsdba

What is the best way to handle problems?

As a Middleware Admin you face many troubles in your daily job. Whenever a problem occurs, you must understand the problem and identify whether it is known or unknown. If it is a known issue, there may already be a workaround, which is where knowledge management helps. As part of best practices you must maintain issues and their resolutions in a Knowledge Management Database (KMDB).

A problem can be understood with the following simple questions, which give you clarity about which area is affected and what could be the reason for the issue:
  • What do we know?
  • What do we need to know?
  • What should we do?
Remember that for every problem there must be a solution or possible alternative solutions. You need enough knowledge to resolve the issue with wisdom; wisdom gives you the right path to reach the right decision to solve the issue.

Problem solving strategy: Problem || Analysis Data -> Information  -> Knowledge -> Wisdom || Solution

Problem vs Issue

An issue is something that happens accidentally for any number of reasons: the network is not in sync, the database server is overloaded, or anything similar. An issue turns into a problem when it is not identified in time and not resolved. A problem needs to be fixed immediately, and its impact can be life threatening. To make it simple: a fever is an issue that can be resolved with pills; a heart attack is a major, life-threatening problem, and if it is not solved immediately there could be a huge loss.

The more you explore middleware and the more troubles you encounter, the more you learn. At the same time, remember that your client always expects the business to run uninterrupted; you are there to deliver that 99.9999% availability.

Bottom line: when the problem is not in the middleware environment, don't play the name-and-blame game. Try to give your best support to the team providing the solution.

Love issues; they make you learn more about the subject. Enjoy the issues and troubles, and your problem-solving skills will be enlightened, so.......... dear smart WLA, rock with them!!!

1. Port Conflict Issue:

While configuring a new WebLogic instance and starting it, you might get an error like: "Port already in use". There can be many reasons for this:
1. Multiple standalone instances are running on the same machine and one of them is already using the port you assigned to the new configuration.
2. Apache might be running on the same port.
3. Other middleware might be running on the same machine with the same port.
On the Solaris operating environment you have a few options:
1. Using the pfiles command:
netstat -na | grep <port>                    --> identify whether the port is in use
pfiles <pid> | grep -i sockname | grep port  --> run against every java process started by startWebLogic.sh or startManagedWebLogic.sh
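
The same check can be scripted. A minimal sketch, assuming a Solaris box where pgrep and pfiles are available (PORT is the port you are investigating):

PORT=7001
for pid in `pgrep java`; do
   # pfiles lists the sockets each process holds; look for the port in question
   if pfiles $pid 2>/dev/null | grep -i sockname | grep "port: $PORT" >/dev/null; then
      echo "Port $PORT is held by PID $pid:"
      ps -o pid,args -p $pid
   fi
done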

2. Another (costly) way, using a third-party package, to find the process that is using a particular port is:

lsof -i tcp:<Port_number>
Note that lsof reports only the processes of the user who runs the command; execute it as root to see sockets owned by every user.
3. Another way is a Perl script using the getservbyport() function; note that it only looks up the well-known service ports registered in the system's services database (/etc/services).
getservbyport(PORT_NUMBER, PROTOCOL_NAME)
Sample perl script goes as follows:
#!/usr/bin/perl
# Look up the registered service name for TCP port 7001 in /etc/services
($name, $aliases, $port_number, $protocol_name) = getservbyport(7001, "tcp");
print "Name = $name\n";
print "Aliases = $aliases\n";
print "Port Number = $port_number\n";
print "Protocol Name = $protocol_name\n";

2.  Config.xml Repository Issue

I copy-pasted a few JMS Queue related configuration entries from an existing WLS config.xml into a newly created WLS domain's config.xml, and startup got stuck at the following line in the WebLogic Server logs:
<Jul 28, 2008 12:08:26 PM EDT> <Notice> <Management> <BEA-140005> <Loading domain configuration from configuration repository at /path/
Sol: Verify every line in the config.xml file; an XML tag may be missing or left unclosed. I had missed one line; after correcting it, everything worked fine.
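
A quick way to catch a missing or unclosed tag before starting the server is a well-formedness check; a minimal sketch, assuming xmllint is installed and the domain uses the config/config.xml layout:

xmllint --noout $DOMAIN_HOME/config/config.xml && echo "config.xml is well-formed"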

3. Multicast Failure

While running a managed server on a remote machine, the following exception appeared:
<Jul 31, 2008 4:16:16 AM EDT> <Error> <Cluster> <BEA-000109> <An error occurred while sending multicast message: java.net.SocketException: Socket is closed
java.net.SocketException: Socket is closed
        at java.net.DatagramSocket.send(DatagramSocket.java:577)

Sol:
1.  Check that ping works between the machines involved in the cluster.
2.  Verify the multicast IP with the MulticastTest utility, trying multiple IPs and ports, and then use the combination that actually passes messages between the two participants.
AppMachine 1: 
$ java utils.MulticastTest -n  app1  -a 237.0.0.5 -p 9999

WebMachine 2: 
$ java utils.MulticastTest -n  app2  -a 237.0.0.5 -p 9999

The messages must be exchanged between them; otherwise check with the network team.
3. The multicast heartbeat can be monitored with the following command line:

java weblogic.cluster.MulticastMonitor <multicastIP> <port> mydomain mycluster


4. Application not accessible

While accessing the application URL after a web application deployment, the following error appeared:
<Aug 19, 2008 10:58:07 AM EDT> <Error> <HTTP> <BEA-101017><[ServletContext(id=29754737,name=webapp1,context-path=/contextpath)] Root cause of ServletException.
java.lang.StackOverflowError

Sol: Check the classpath; the root cause was a communication gap between the Servlet and an EJB.


5. Editing the WLS password
A new password was set in boot.properties while the server was offline.

The Admin server started, but starting the managed instances produced the following error:
<Aug 22, 2008 8:21:54 AM EDT> <Critical> <Security> <BEA-090402> <Authentication denied: Boot identity not valid; The user name and/or password from the boot identity file (boot.properties) is not valid. The boot identity may have been changed since the boot identity file was created. Please edit and update the boot identity file with the proper values of username and password. The first time the updated boot identity file is used to start the server, these new values are encrypted.>
********************************************************
The WebLogic Server did not start up properly.
Reason: weblogic.security.SecurityInitializationException: Authentication denied: Boot identity not valid; The user name and/or password from the boot identity file (boot.properties) is not valid. The boot identity may have been changed since the boot identity file was created. Please edit and update the boot identity file with the proper values of username and password. The first time the updated boot identity file is used to start the server, these new values are encrypted.
************************************************************

Sol: Resolved by editing boot.properties and startManagedWebLogic.sh with the new password in cleartext, and renaming the ldap folder inside the managed server's instance folder under the domain directory.
During the next startup, boot.properties is encrypted and rewritten, and a new ldap folder is generated for the managed instance in the domain directory, containing a file named after the instance with the .ldif extension.
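
For reference, a minimal sketch of recreating the boot identity file in clear text (hypothetical credentials, WLS 9.x-style path, and a placeholder server name; WebLogic encrypts the values on the next successful boot):

cat > $DOMAIN_HOME/servers/ManagedServer1/security/boot.properties <<EOF
username=weblogic
password=NewPassword1
EOF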

6. Deployment Error

 Module Name: apollo, Error: [HTTP:101179][HTTP] Error occurred while parsing descriptor in Web application "/domainpath/application.war" [Path="/source_code_path", URI="applicaiton.war"
org.xml.sax.SAXParseException: Element type "local-path" must be followed by either attribute specifications, ">" or "/>".
Sol:

Check the deployment descriptor (here weblogic.xml) and correct the element at the line indicated by the SAXParseException.


7. JNDI Issue

While deploying an application that uses multiple JMS queues, the following error appeared:
data couldn't be saved. The exception is : Error while doSenderQueueLookUp for :QUEUEJNDIUnable to resolve 'QUEUEJNDI' Resolved
Sol:
Verify that the persistence store is connected to the connection pool.


8.  New managed server Configuration issue

Here is a set of errors hit while configuring a new WebLogic 9.2 MP3 managed server and adding it to a web cluster. When the managed server was started for the first time, it showed the following:
1. BEA-000362
2. BEA-000386
On Solaris box:
<Mar 30, 2009 2:56:49 PM EDT> <Critical> <WebLogicServer> <BEA-000362> <Server failed. Reason:
There are 1 nested errors:
java.net.UnknownHostException: mywebhostname.com: myhostname.com
        at java.net.InetAddress.getAllByName0(InetAddress.java:1128)
        at java.net.InetAddress.getAllByName0(InetAddress.java:1098)
        at java.net.InetAddress.getAllByName(InetAddress.java:1061)


On Linux box:
<Critical> <WebLogicServer> <BEA-000386> <Server subsystem failed. Reason: weblogic.ldap.EmbeddedLDAPException: Unable to open initial replica url: http://myadminhost.com:7001/bea_wls_management_internal2/wl_management
weblogic.ldap.EmbeddedLDAPException: Unable to open initial replica url: http://myadminhost.com:7001/bea_wls_management_internal2/wl_management
        at weblogic.ldap.EmbeddedLDAP.getInitialReplicaFromAdminServer(EmbeddedLDAP.java:1319)
        at weblogic.ldap.EmbeddedLDAP.start(EmbeddedLDAP.java:221)
        at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:181)
java.io.FileNotFoundException: Response: '500: Internal Server Error' for url: 'http://myadminhost.com:7001/bea_wls_management_internal2/wl_management'
        at weblogic.net.http.HttpURLConnection.getInputStream(HttpURLConnection.java:472)
        at weblogic.ldap.EmbeddedLDAP.getInitialReplicaFromAdminServer(EmbeddedLDAP.java:1296)
        at weblogic.ldap.EmbeddedLDAP.start(EmbeddedLDAP.java:221)
        at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
        Truncated. see log file for complete stacktrace

Solution/Work around:
BEA-000362
First, make sure your configuration is right: double-check that the IP, VIP, or domain name assigned in config.xml is correct. Then investigate whether network communication is actually happening for the hostnames given in config.xml: ping the admin server host's DNS name from the managed server box, and check that /etc/hosts has been updated with the new host's DNS entry. Sometimes DNS table entries that are not updated across WAN domains cause this issue.
BEA-000386
Check whether the hostname given as the managed server's listen address matches the hosting DNS name/IP, using the WebLogic console (this can be a time-consuming process).
   A Unix-aware admin will always prefer grep on config.xml :) - a few quick checks are sketched below.
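
A minimal sketch of those checks, run from the managed-server host (the hostnames are the placeholders from the log excerpts above; adjust the path to your domain):

grep -i "listen-address" $DOMAIN_HOME/config/config.xml   # confirm the configured listen addresses
ping myadminhost.com                                      # is the admin server host reachable?
nslookup mywebhostname.com                                # does DNS resolve the managed-server hostname?
grep myadminhost /etc/hosts                               # stale or missing hosts entry?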



Session Replication issues on Web-Tier cluster
BEA-101310 
Web server failed to perform batched update for replicated sessions
ohh my god!!!

 
BEA-000117 
Received a stale replication request
Whenever you encounter this error code in your logs, check the following:
1. Network connectivity between the machines hosting the cluster members.
2. Whether all the member servers in that cluster are alive.

BEA-090870 

Security issue while starting Admin Server

Exception:
####<Oct 28, 2010 3:38:41 AM PDT> <Error> <Security> <Unknown> <AdminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <1288262321492> <BEA-090870> <The realm "myrealm" failed to be loaded: weblogic.security.service.SecurityServiceException: java.lang.ExceptionInInitializerError.
weblogic.security.service.SecurityServiceException: java.lang.ExceptionInInitializerError
at weblogic.security.service.CSSWLSDelegateImpl.initializeServiceEngine(CSSWLSDelegateImpl.java:342)
The suggested workaround: log in to the database as the RDBMS security store user (for example, if the database login is "weblogic", connect with those credentials) and run the SQL script below, which executes the different queries mentioned in the file:
$BEA_HOME\wlserver_10.3\server\lib\rdbms_security_store_oracle.sql
Then restart your AdminServer.
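
For example, a minimal sketch of running that script with SQL*Plus on Unix (user, password and connect string are placeholders):

sqlplus weblogic/<password>@<db_alias> @$BEA_HOME/wlserver_10.3/server/lib/rdbms_security_store_oracle.sql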
BEA-000337 - Stuck Thread
** A handy first step: find the server's JVM PID with jps -v | grep <instance>.
Next, take a thread dump (kill -3 <pid> on Unix, CTRL+Break on Windows) and open it in a thread dump analyzer of your choice.
Check the logs where the STUCK thread was reported: when did it first appear, and by how much did it exceed the configured stuck-thread time? If that interval has been exceeded by more than 10 minutes and the thread pool has no idle threads, stop that WebLogic server instance.
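
A minimal sketch of capturing a series of thread dumps for later comparison (the instance name is a placeholder; each dump is appended to the server's standard-out log):

PID=`jps -v | grep <instance_name> | awk '{print $1}'`
for i in 1 2 3; do
   kill -3 $PID      # request a full thread dump
   sleep 30
done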
WebLogic 9.2 MP3  Managed Server cannot start !!??!!
On Red Hat Linux 2.4 a WebLogic managed server could not start. I tried killing the process and starting it again many times with no luck. Memory was in a comfortable range, the CPU had no load, and network connectivity was verified and in good condition. The WebLogic server log gave no hint about why the server instance could not start.
Finally I checked the uptime of the box: it had been started more than a year earlier, and I thought rebooting the machine might help.



BEA-000388 -
JVM called WLS shutdown hook. The server will force shutdown now
Issue:
The WebLogic server got shut down with the following error stack found in the server standard-out log files:
####<Sep 7, 2012 3:14:20 PM IST> <Notice> <WebLogicServer> <Ncorp-PLM-08> <Ncorp-PLM-08-AgileServer> <Thread-1> <<WLS Kernel>> <> <> <1347011060768> <BEA-000388> <JVM called WLS shutdown hook. The server will force shutdown now>
####<Sep 7, 2012 3:14:20 PM IST> <Alert> <WebLogicServer> <Ncorp-PLM-08> <Ncorp-PLM-08-AgileServer> <Thread-1> <<WLS Kernel>> <> <> <1347011060768> <BEA-000396> <Server shutdown has been requested by <WLS Kernel>>
####<Sep 7, 2012 3:14:20 PM IST> <Notice> <WebLogicServer> <Ncorp-PLM-08> <Ncorp-PLM-08-AgileServer> <Thread-1> <<WLS Kernel>> <> <> <1347011060771> <BEA-000365> <Server state changed to FORCE_SUSPENDING>
The possible cause and a workaround for the error below are as follows:
<Sep 7, 2012 3:14:20 PM IST> <Notice> <WebLogicServer> <Ncorp-PLM-08> <Ncorp-PLM-08-AgileServer> <Thread-1> <<WLS Kernel>> <> <> <1347011060768> <BEA-000388> <JVM called WLS shutdown hook. The server will force shutdown now>
1) It seems some component is sending a termination signal to the JVM. The JVM monitors and catches OS signals such as the CTRL+C event, the logoff event, and the shutdown event; when it catches one of these signals, it shuts down all Java processes.
Please try these possible solutions to this problem:
· Specify the -Xrs parameter in the JAVA startup arguments and start the admin server (as sketched below).
· Note: -Xrs is a non-standard option developed by Sun Microsystems for their HotSpot JVM. BEA JRockit continues to support this option; however, the BEA JRockit non-standard option -XnoHup provides the same functionality. -Xrs reduces the JVM's use of operating-system signals. If the JVM is run as a service (for example, the servlet engine for a web server), it can receive CTRL_LOGOFF_EVENT but should not initiate shutdown, since the operating system will not actually terminate the process. To avoid possible interference such as this, the -Xrs command-line option does not install a console control handler, meaning it does not watch for or process CTRL_C_EVENT, CTRL_CLOSE_EVENT, CTRL_LOGOFF_EVENT, or CTRL_SHUTDOWN_EVENT.
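
One way to add the option, sketched for a Unix start script (the exact file and variable name depend on your domain; setDomainEnv.sh and JAVA_OPTIONS are assumptions here):

# in setDomainEnv.sh, before the server start command is built:
JAVA_OPTIONS="${JAVA_OPTIONS} -Xrs"
export JAVA_OPTIONS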

2) If the issue recurs even after trying the above option, then apply the following JAVA option:
· Depending on the JVM version, it may be possible to get a thread dump before the process exits.
· HotSpot supports the command-line option -XX:+ShowMessageBoxOnError.
· The corresponding JRockit option is -Djrockit.waitonerror.
· While the JVM goes down, it may prompt the user: "Do you want to debug the problem?"
· This pauses the JVM, creating an opportunity to generate a thread dump (a stack trace of every thread in the JVM), attach a debugger, or perform some other debugging activity.

BEA-160192

<BEA-160192> <While upgrading weblogic.xml, encountered "max-in-memory-sessions" param "session-param". This param is unknown and will be removed>
You can see this error in your WebLogic logs when the deployment descriptor weblogic.xml has not been updated to the latest descriptor definition for WebLogic 11g.

To resolve this, source setWLSEnv.sh and run the following; it converts the descriptors to a form compatible with WebLogic 11g. Because the converted output is only a template, the base weblogic.xml still needs to be amended with the 11g descriptor definition and the build script updated to create the correct file from it:

java weblogic.DDConverter -d . <<ear/warfile or webapp/ear directory>>
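
For example, a sketch with placeholder paths (-d sets the destination directory for the converted archive):

. $WL_HOME/server/bin/setWLSEnv.sh
java weblogic.DDConverter -d /tmp/dd_converted /deployments/myapp.ear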

Migrate Non-ASM Oracle Database to ASM Storage:

I have created an ASM disk group named '+DATA' and will use it to store all the database files.


Step 1 - List the database files that need to be migrated to ASM storage:


SET TERMOUT OFF;
COLUMN current_instance NEW_VALUE current_instance NOPRINT;
SELECT rpad(instance_name, 17) current_instance FROM v$instance;
SET TERMOUT ON;

PROMPT
PROMPT +------------------------------------------------------------------------+
PROMPT | Report   : Data File Report (all physical files)                       |
PROMPT | Instance : &current_instance                                           |
PROMPT +------------------------------------------------------------------------+

SET ECHO        OFF
SET FEEDBACK    6
SET HEADING     ON
SET LINESIZE    180
SET PAGESIZE    50000
SET TERMOUT     ON
SET TIMING      OFF
SET TRIMOUT     ON
SET TRIMSPOOL   ON
SET VERIFY      OFF

CLEAR COLUMNS
CLEAR BREAKS
CLEAR COMPUTES

COLUMN tablespace      FORMAT a30                 HEADING 'Tablespace Name / File Class'
COLUMN filename        FORMAT a75                 HEADING 'Filename'
COLUMN filesize        FORMAT 9,999,999,999,999   HEADING 'File Size'
COLUMN autoextensible  FORMAT a4                  HEADING 'Auto'
COLUMN increment_by    FORMAT 999,999,999,999     HEADING 'Next'
COLUMN maxbytes        FORMAT 999,999,999,999     HEADING 'Max'

BREAK ON report

COMPUTE sum OF filesize  ON report

SELECT /*+ ordered */
    d.tablespace_name                     tablespace
  , d.file_name                           filename
  , d.bytes                               filesize
  , d.autoextensible                      autoextensible
  , d.increment_by * e.value              increment_by
  , d.maxbytes                            maxbytes
FROM
    sys.dba_data_files d
  , v$datafile v
  , (SELECT value
     FROM v$parameter
     WHERE name = 'db_block_size') e
WHERE
  (d.file_name = v.name)
UNION
SELECT
    d.tablespace_name                     tablespace
  , d.file_name                           filename
  , d.bytes                               filesize
  , d.autoextensible                      autoextensible
  , d.increment_by * e.value              increment_by
  , d.maxbytes                            maxbytes
FROM
    sys.dba_temp_files d
  , (SELECT value
     FROM v$parameter
     WHERE name = 'db_block_size') e
UNION
SELECT
    '[ ONLINE REDO LOG ]'
  , a.member
  , b.bytes
  , null
  , TO_NUMBER(null)
  , TO_NUMBER(null)
FROM
    v$logfile a
  , v$log b
WHERE
    a.group# = b.group#
UNION
SELECT
    '[ CONTROL FILE    ]'
  , a.name
  , TO_NUMBER(null)
  , null
  , TO_NUMBER(null)
  , TO_NUMBER(null)
FROM
    v$controlfile a
ORDER BY 1,2
/


=====================
Tablespace Name / File Class   Filename                                                    File Size Auto             Next              Max
------------------------------ -------------------------------------------------- ------------------ ---- ---------------- ----------------
SYSAUX                         /u01/app/oracle/oradata/testdb/sysaux01.dbf               450,887,680 YES        10,485,760   34,359,721,984
SYSTEM                         /u01/app/oracle/oradata/testdb/system01.dbf               702,545,920 YES        10,485,760   34,359,721,984
TEMP                           /u01/app/oracle/oradata/testdb/temp01.dbf                  20,971,520 YES           655,360   34,359,721,984
UNDOTBS1                       /u01/app/oracle/oradata/testdb/undotbs01.dbf               31,457,280 YES         5,242,880   34,359,721,984
USERS                          /u01/app/oracle/oradata/testdb/users01.dbf                  5,242,880 YES         1,310,720   34,359,721,984
[ CONTROL FILE    ]            /u01/app/oracle/flash_recovery_area/testdb/control
                               02.ctl

[ CONTROL FILE    ]            /u01/app/oracle/oradata/testdb/control01.ctl
[ ONLINE REDO LOG ]            /u01/app/oracle/oradata/testdb/redo01.log                  52,428,800
[ ONLINE REDO LOG ]            /u01/app/oracle/oradata/testdb/redo02.log                  52,428,800
[ ONLINE REDO LOG ]            /u01/app/oracle/oradata/testdb/redo03.log                  52,428,800
                                                                                  ------------------
sum                                                                                    1,368,391,680

10 rows selected.


Above is the list of all the database files that we will be migrating from the filesystem to ASM storage.


Step 2 - If your database is running with a pfile, edit the following parameters in the pfile to point to the appropriate ASM disk groups, then shut the database down normally and start it in NOMOUNT state.

control_files='+DATA'
db_create_file_dest='+DATA'
db_recovery_file_dest='+DATA'

If your database is running with an spfile, simply alter the parameters accordingly:


SQL> ALTER SYSTEM SET CONTROL_FILES='+DATA' SCOPE=SPFILE;

System altered.

SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST='+DATA' SCOPE=SPFILE;

System altered.

SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+DATA' SCOPE=SPFILE;

System altered.


Step 3 - Shutdown the DB normally and start it up in nomount state.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.


SQL> startup nomount;
ORACLE instance started.

Total System Global Area  841162752 bytes
Fixed Size                  1339768 bytes
Variable Size             494931592 bytes
Database Buffers          339738624 bytes
Redo Buffers                5152768 bytes



Step 4 - Restore the control file  to ASM diskgroup '+DATA' from its original filesystem location.


[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Wed Jun 17 15:57:43 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TESTDB (not mounted)



RMAN> restore controlfile from '/u01/app/oracle/flash_recovery_area/testdb/control02.ctl';

Starting restore at 17-JUN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=24 device type=DISK

channel ORA_DISK_1: copied control file copy
output file name=+DATA/testdb/controlfile/current.264.882633485
Finished restore at 17-JUN-15

RMAN>


Step 5 - The database control file has now been migrated to ASM storage, so let's mount the database.

SQL> alter database mount;

Database altered.



Step 6 - Back up the database as a copy to ASM storage.

RMAN> backup as copy database format '+DATA';

Starting backup at 17-JUN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=22 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/u01/app/oracle/oradata/testdb/system01.dbf
output file name=+DATA/testdb/datafile/system.263.882633749 tag=TAG20150617T160228 RECID=1 STAMP=882633766
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=/u01/app/oracle/oradata/testdb/sysaux01.dbf
output file name=+DATA/testdb/datafile/sysaux.262.882633775 tag=TAG20150617T160228 RECID=2 STAMP=882633789
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=/u01/app/oracle/oradata/testdb/undotbs01.dbf
output file name=+DATA/testdb/datafile/undotbs1.269.882633801 tag=TAG20150617T160228 RECID=3 STAMP=882633800
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting datafile copy
copying current control file
output file name=+DATA/testdb/controlfile/backup.259.882633801 tag=TAG20150617T160228 RECID=4 STAMP=882633802
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/u01/app/oracle/oradata/testdb/users01.dbf
output file name=+DATA/testdb/datafile/users.258.882633803 tag=TAG20150617T160228 RECID=5 STAMP=882633803
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 17-JUN-15
channel ORA_DISK_1: finished piece 1 at 17-JUN-15
piece handle=+DATA/testdb/backupset/2015_06_17/nnsnf0_tag20150617t160228_0.257.882633807 tag=TAG20150617T160228 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 17-JUN-15



Step 7 - Now that we have a copy of the current database on ASM storage, we can switch the database to that copy.


RMAN> SWITCH DATABASE TO COPY;

datafile 1 switched to datafile copy "+DATA/testdb/datafile/system.263.882633749"
datafile 2 switched to datafile copy "+DATA/testdb/datafile/sysaux.262.882633775"
datafile 3 switched to datafile copy "+DATA/testdb/datafile/undotbs1.269.882633801"
datafile 4 switched to datafile copy "+DATA/testdb/datafile/users.258.882633803"

RMAN>



Step 8 - Now open the database.


SQL> alter database open;

Database altered.

SQL>


Step 9 - Now create a new temp file on ASM and drop the old one, so that the temporary tablespace is also stored on ASM.


SQL> select tablespace_name, file_name, bytes from dba_temp_files;

TABLESPACE_NAME                FILE_NAME                                               BYTES
------------------------------ -------------------------------------------------- ----------
TEMP                           /u01/app/oracle/oradata/testdb/temp01.dbf            20971520


SQL> alter tablespace temp add tempfile size 50m autoextend on maxsize 2048m;

Tablespace altered.

SQL> select tablespace_name, file_name, bytes from dba_temp_files;

TABLESPACE_NAME                FILE_NAME                                               BYTES
------------------------------ -------------------------------------------------- ----------
TEMP                           /u01/app/oracle/oradata/testdb/temp01.dbf            20971520
TEMP                           +DATA/testdb/tempfile/temp.256.882634443             52428800


Now drop the old temp file, which is stored on the filesystem.


SQL> alter database tempfile '/u01/app/oracle/oradata/testdb/temp01.dbf' drop including datafiles;

Database altered.

SQL> select tablespace_name, file_name, bytes from dba_temp_files;

TABLESPACE_NAME                FILE_NAME                                               BYTES
------------------------------ -------------------------------------------------- ----------
TEMP                           +DATA/testdb/tempfile/temp.256.882634443             52428800



Step 10 - Let's migrate the online redo log files to ASM storage now. Each group is dropped and re-created. Note that a redo log group can be dropped only while it is INACTIVE; if the group you want to move is still CURRENT or ACTIVE, issue ALTER SYSTEM SWITCH LOGFILE (followed by a checkpoint) and wait for it to become INACTIVE before dropping it.

SQL> select a.group#, a.member, b.bytes, b.status from v$logfile a, v$log b where a.group# = b.group#;

    GROUP# MEMBER                                                  BYTES STATUS
---------- -------------------------------------------------- ---------- ----------------
         3 /u01/app/oracle/oradata/testdb/redo03.log            52428800 CURRENT
         2 /u01/app/oracle/oradata/testdb/redo02.log            52428800 ACTIVE
         1 /u01/app/oracle/oradata/testdb/redo01.log            52428800 INACTIVE



SQL> alter database drop logfile group 1;

Database altered.

SQL> alter database add logfile group 1 size 50m;

Database altered.



SQL> select a.group#, a.member, b.bytes, b.status from v$logfile a, v$log b where a.group# = b.group#;

    GROUP# MEMBER                                                  BYTES STATUS
---------- -------------------------------------------------- ---------- ----------------
         3 /u01/app/oracle/oradata/testdb/redo03.log            52428800 INACTIVE
         2 /u01/app/oracle/oradata/testdb/redo02.log            52428800 CURRENT
         1 +DATA/testdb/onlinelog/group_1.287.882635039         52428800 INACTIVE
         1 +DATA/testdb/onlinelog/group_1.286.882635039         52428800 INACTIVE

SQL> alter database drop logfile group 3;

Database altered.

SQL> alter database add logfile group 3 size 50m;

Database altered.



SQL> select a.group#, a.member, b.bytes, b.status from v$logfile a, v$log b where a.group# = b.group#;

    GROUP# MEMBER                                                  BYTES STATUS
---------- -------------------------------------------------- ---------- ----------------
         3 +DATA/testdb/onlinelog/group_3.282.882635099         52428800 INACTIVE
         2 /u01/app/oracle/oradata/testdb/redo02.log            52428800 INACTIVE
         1 +DATA/testdb/onlinelog/group_1.287.882635039         52428800 CURRENT
         1 +DATA/testdb/onlinelog/group_1.286.882635039         52428800 CURRENT
         3 +DATA/testdb/onlinelog/group_3.280.882635099         52428800 INACTIVE

SQL> alter database drop logfile group 2;

Database altered.

SQL> alter database add logfile group 2 size 50m;

Database altered.



All of the redo log groups are now migrated to ASM storage.


SQL> select a.group#, a.member, b.bytes, b.status from v$logfile a, v$log b where a.group# = b.group#;

    GROUP# MEMBER                                                  BYTES STATUS
---------- -------------------------------------------------- ---------- ----------------
         3 +DATA/testdb/onlinelog/group_3.282.882635099         52428800 INACTIVE
         2 +DATA/testdb/onlinelog/group_2.289.882635185         52428800 UNUSED
         1 +DATA/testdb/onlinelog/group_1.287.882635039         52428800 CURRENT
         1 +DATA/testdb/onlinelog/group_1.286.882635039         52428800 CURRENT
         3 +DATA/testdb/onlinelog/group_3.280.882635099         52428800 INACTIVE
         2 +DATA/testdb/onlinelog/group_2.290.882635187         52428800 UNUSED




Finally, let's verify that all the database files have been migrated to ASM storage:



Tablespace Name / File Class   Filename                                                                             File Size Auto             Next           Max
------------------------------ --------------------------------------------------------------------------- ------------------ ---- ---------------- ----------------
SYSAUX                         +DATA/testdb/datafile/sysaux.262.882633775                                         450,887,680 YES        10,485,760   34,359,721,984
SYSTEM                         +DATA/testdb/datafile/system.263.882633749                                         702,545,920 YES        10,485,760   34,359,721,984
TEMP                           +DATA/testdb/tempfile/temp.256.882634443                                            52,428,800 YES        52,428,800    2,147,483,648
UNDOTBS1                       +DATA/testdb/datafile/undotbs1.269.882633801                                        31,457,280 YES         5,242,880   34,359,721,984
USERS                          +DATA/testdb/datafile/users.258.882633803                                            5,242,880 YES         1,310,720   34,359,721,984
[ CONTROL FILE    ]            +DATA/testdb/controlfile/current.264.882633485
[ ONLINE REDO LOG ]            +DATA/testdb/onlinelog/group_1.286.882635039                                        52,428,800
[ ONLINE REDO LOG ]            +DATA/testdb/onlinelog/group_1.287.882635039                                        52,428,800
[ ONLINE REDO LOG ]            +DATA/testdb/onlinelog/group_2.289.882635185                                        52,428,800
[ ONLINE REDO LOG ]            +DATA/testdb/onlinelog/group_2.290.882635187                                        52,428,800
[ ONLINE REDO LOG ]            +DATA/testdb/onlinelog/group_3.280.882635099                                        52,428,800
[ ONLINE REDO LOG ]            +DATA/testdb/onlinelog/group_3.282.882635099                                        52,428,800
                                                                                                           ------------------
sum                                                                                                             1,557,135,360

12 rows selected.


Conclusion: the database is now fully migrated from non-ASM (filesystem) storage to ASM.

Linux Cron job examples:
============================

MIN HOUR DOM MON DOW CMD > /tmp/output.log

Crontab Fields and Allowed Ranges (Linux Crontab Syntax)

Field   Description      Allowed Value
MIN     Minute field     0 to 59
HOUR    Hour field       0 to 23
DOM     Day of Month     1 to 31
MON     Month field      1 to 12
DOW     Day of Week      0 to 6
CMD     Command          Any command to be executed

/tmp/output.log is the command output log file.

1. Scheduling a Job For a Specific Time Every Day
The basic usage of cron is to execute a job at a specific time as shown below. This will execute the full backup shell script (full-backup) on 10th June at 08:30 AM.

Please note that the time field uses 24 hours format. So, for 8 AM use 8, and for 8 PM use 20.

30 08 10 06 * /home/samimalik/full-backup

30 – 30th Minute
08 – 08 AM
10 – 10th Day
06 – 6th Month (June)
* – Every day of the week

2. Schedule a Job For More Than One Instance (e.g. Twice a Day)
The following entry takes an incremental backup twice a day, every day.

This example executes the specified incremental backup shell script (incremental-backup) at 11:00 and 16:00 every day. A comma-separated value in a field specifies that the command should be executed at each of the listed times.

00 11,16 * * * /home/samimalik/bin/incremental-backup

00 – 0th Minute (Top of the hour)
11,16 – 11 AM and 4 PM
* – Every day
* – Every month
* – Every day of the week

3. Schedule a Job for Specific Range of Time (e.g. Only on Weekdays)
If you want a job to be scheduled every hour within a specific range of time, use the following.

Cron Job everyday during working hours
This example checks the status of the database every day (including weekends) during the working hours 9 a.m. – 6 p.m.

00 09-18 * * * /home/samimalik/bin/check-db-status

00 – 0th Minute (Top of the hour)
09-18 – 9 am, 10 am, 11 am, 12 pm, 1 pm, 2 pm, 3 pm, 4 pm, 5 pm, 6 pm
* – Every day
* – Every month
* – Every day of the week
Cron Job every weekday during working hours
This example checks the status of the database every weekday (i.e. excluding Sat and Sun) during the working hours 9 a.m. – 6 p.m.

00 09-18 * * 1-5 /home/samimalik/bin/check-db-status

00 – 0th Minute (Top of the hour)
09-18 – 9 am, 10 am, 11 am, 12 pm, 1 pm, 2 pm, 3 pm, 4 pm, 5 pm, 6 pm
* – Every day
* – Every month
1-5 -Mon, Tue, Wed, Thu and Fri (Every Weekday)

4. How to View Crontab Entries?
View Current Logged-In User’s Crontab entries
To view your crontab entries type crontab -l from your unix account as shown below.

samimalik@dev-db$ crontab -l
@yearly /home/samimalik/annual-maintenance
*/10 * * * * /home/samimalik/check-disk-space

[Note: This displays crontab of the current logged in user]

View Root Crontab entries

Login as root user (su – root) and do crontab -l as shown below.

root@dev-db# crontab -l

no crontab for root

View Other Linux Users' Crontab entries
To view crontab entries of other Linux users, login to root and use -u {username} -l as shown below.

root@dev-db# crontab -u sathiya -l
@monthly /home/sathiya/monthly-backup
00 09-18 * * * /home/sathiya/check-db-status

5. How to Edit Crontab Entries?
Edit Current Logged-In User’s Crontab entries
To edit crontab entries, use crontab -e as shown below. By default this will edit the currently logged-in user's crontab.

samimalik@dev-db$ crontab -e
@yearly /home/samimalik/centos/bin/annual-maintenance
*/10 * * * * /home/samimalik/debian/bin/check-disk-space
~
"/tmp/crontab.XXXXyjWkHw" 2L, 83C

[Note: This will open the crontab file in Vim editor for editing.
Please note cron created a temporary file /tmp/crontab.XX... ]

When you save the above temporary file with :wq, it saves the crontab and displays the following message indicating the crontab was successfully modified.

~
"crontab.XXXXyjWkHw" 2L, 83C written
crontab: installing new crontab

Edit Root Crontab entries
Login as root user (su – root) and do crontab -e as shown below.

root@dev-db# crontab -e

Edit Other Linux Users' Crontab File entries
To edit crontab entries of other Linux users, login to root and use -u {username} -e as shown below.

root@dev-db# crontab -u sathiya -e
@monthly /home/sathiya/fedora/bin/monthly-backup
00 09-18 * * * /home/sathiya/ubuntu/bin/check-db-status
~
~
~
"/tmp/crontab.XXXXyjWkHw" 2L, 83C

6. Schedule a Job for Every Minute Using Cron.
Ideally you may not have a requirement to schedule a job every minute. But understanding this example will help you understand the other examples mentioned in this article.

* * * * * CMD

The * means all possible values, i.e. every minute of every hour throughout the year. Rather than using * directly, you will find it more useful in the following cases.

When you specify */5 in the minute field, it means every 5 minutes.
When you specify 0-10/2 in the minute field, it means every 2 minutes within the first 10 minutes of the hour.
The same convention can be used in the other four fields as well; a couple of concrete lines follow.
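
For illustration, two sample entries (reusing the check-disk-space script from the later examples; paths are placeholders):

*/5 * * * * /home/samimalik/check-disk-space
(runs every 5 minutes)

0-10/2 * * * * /home/samimalik/check-disk-space
(runs every 2 minutes during the first 10 minutes of each hour)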

7. Schedule a Background Cron Job For Every 10 Minutes.
Use the following, if you want to check the disk space every 10 minutes.

*/10 * * * * /home/samimalik/check-disk-space

This executes the specified command check-disk-space every 10 minutes throughout the year. But you may need to execute the command only during office hours, or vice versa; the earlier examples show how to do that.

Instead of specifying values in the 5 fields, in some special cases you can use a single keyword: @ followed by a keyword such as reboot, midnight, yearly, or hourly.

Table: Cron special keywords and their meanings

Keyword    Equivalent
@yearly    0 0 1 1 *
@daily     0 0 * * *
@hourly    0 * * * *
@reboot    Run at startup


8. Schedule a Job For First Minute of Every Year using @yearly
If you want a job to be executed on the first minute of every year, then you can use the @yearly cron keyword as shown below.

This will execute the system annual maintenance using annual-maintenance shell script at 00:00 on Jan 1st for every year.

@yearly /home/samimalik/red-hat/bin/annual-maintenance

9. Schedule a Cron Job Beginning of Every Month using @monthly
It is similar to @yearly above, but executes the command once a month using the @monthly cron keyword.

This will execute the shell script tape-backup at 00:00 on 1st of every month.

@monthly /home/samimalik/suse/bin/tape-backup

10. Schedule a Background Job Every Day using @daily
Using the @daily cron keyword, this will do a daily log file cleanup using the cleanup-logs shell script at 00:00 every day.

@daily /home/samimalik/arch-linux/bin/cleanup-logs "day started"

11. How to Execute a Linux Command After Every Reboot using @reboot?
Using the @reboot cron keyword, this will execute the specified command once every time the machine boots.

@reboot CMD

12. How to Disable/Redirect the Crontab Mail Output using the MAILTO keyword?
By default crontab sends the job output to the user who scheduled the job. If you want to redirect the output to a specific user, add or update the MAILTO variable in the crontab as shown below.

samimalik@dev-db$ crontab -l
MAIL="samimalik"

@yearly /home/samimalik/annual-maintenance
*/10 * * * * /home/samimalik/check-disk-space

[Note: Crontab of the current logged in user with the MAILTO variable]
If you want the mail not to be sent anywhere, i.e. to stop the crontab output from being emailed, add or update the MAILTO variable in the crontab as shown below.

MAILTO=""

13. How to Execute a Linux Cron Job Every Second Using Crontab
You cannot schedule an every-second cron job, because the minimum unit you can specify in cron is a minute. In a typical scenario, there is no reason for most of us to run any job every second.

14. Specify PATH Variable in the Crontab
In all the above examples we specified the absolute path of the Linux command or shell script that needs to be executed.

For example, instead of specifying /home/samimalik/tape-backup, if you want to just specify tape-backup, then add the path /home/samimalik to the PATH variable in the crontab as shown below.

samimalik@dev-db$ crontab -l

PATH=/bin:/sbin:/usr/bin:/usr/sbin:/home/samimalik

@yearly annual-maintenance
*/10 * * * * check-disk-space

[Note: Crontab of the current logged in user with the PATH variable]

15. Installing Crontab From a Cron File
Instead of directly editing the crontab file, you can also add all the entries to a cron file first. Once you have all those entries in the file, you can upload or install them to cron as shown below.

samimalik@dev-db$ crontab -l
no crontab for samimalik

$ cat cron-file.txt
@yearly /home/samimalik/annual-maintenance
*/10 * * * * /home/samimalik/check-disk-space

samimalik@dev-db$ crontab cron-file.txt

samimalik@dev-db$ crontab -l
@yearly /home/samimalik/annual-maintenance
*/10 * * * * /home/samimalik/check-disk-space

Note: This will install the cron-file.txt to your crontab, which will also remove your old cron entries. So, please be careful while uploading cron entries from a cron-file.txt.


Cluster db alerts in Real Time:

Data Guard
Data Guard Fast-Start Failover
Data Guard Performance 
Data Guard Status
Database Cardinality 
Database Job Status
Database Wait Bottlenecks
Deferred Transactions 
Failed Logins
Flash Recovery
Invalid Objects
Invalid Objects by Schema
Recovery
Recovery Area
Response
Segment Advisor Recommendations
Session Suspended
Snapshot Too Old
Streams Processes Count 
Suspended Session
Tablespace Allocation
Tablespaces Full 
Tablespaces Full (dictionary managed) 
Tablespaces With Problem Segments 
User Block 

Automatic Storage Management Alerts:
============================
 
Alert Log
Alert Log Content 
Alert Log Error Status 
Cluster Disk Group Performance 
Cluster Disk Performance
Database Disk Group Usage
Disk Group Usage
Disk Path 
Disk Status
Instance Disk Group Performance 
Instance Disk Performance 
Offline Disk Count
Response
Single Instance Disk Group Performance

Database Upgrade from 10.2.0.4 to 11.2.0.3 in Silent Mode using DBUA:
===============================================

pre upgrade :
===============
connect to the database.
SQL> @/local/oracle/product/rdbms/11g/rdbms/admin/utlu112i.sql
minimum required SYSTEM size: 1652 M
minimum required SYSAUX size: 1047 M
minimum required UNDOTBS size: 400 M
minimum required TEMP size: 60 M
minimum required SGA size : 1076M
PURGE DBA_RECYCLEBIN;  -- from the database level
EXECUTE dbms_stats.gather_dictionary_stats;
select count(*) from dba_objects where status='INVALID';
@?/rdbms/admin/utlrp.sql

"How to Pre-Process SYS.AUD$ Records Pre-Upgrade From 10.1 or Later to 11.2". My Oracle Support Note 1329590.1
Standard Auditing
               SELECT count(*) FROM sys.aud$ WHERE dbid is null;
Standard Auditing, only when Oracle Label Security (OLS) and/or Database Vault (DV) is installed
               SELECT count(*) FROM system.aud$ WHERE dbid is null;
Fine Grained Auditing
               SELECT count(*) FROM sys.fga_log$ WHERE dbid is null;

copy the password file to all the nodes.
verify that the db is registered in oratab or update it manually
<SID_NAME>:<old oracle home>:N
nrtd:/local/oracle/product/rdbms/10g:N

known issue :
===============
ORA-00205: error in identifying control file, check alert log for more info
/local/oracle/product/crs/bin >ls -ltr oracle
/local/oracle/product/crs/bin >chmod 6751 oracle
/local/oracle/product/crs/bin >ls -ltr oracle
-rwsr-s--x 1 oracle oinstall 203974257 Feb  1 12:01 oracle

No end user should be connected to the database; verify with:
ps -ef | grep nrtd |grep -i oracle | grep LOCAL=NO

take the cold backup
stop the database using srvctl
start the database using srvctl

-------------------------------------------------------------------------------------------

Actual upgrade :
==================
export the new home environmental file
export ORACLE_BASE=/local/oracle/product
export ORACLE_SID=nrtdtest1
export ORACLE_HOME=/local/oracle/product/rdbms/11g
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

check which dbua gets picked up; it should refer to the 11g home:
/local/oracle/product/rdbms/11g/bin/dbua


dbua -silent -dbName nrtd -oracleHome /local/oracle/product/rdbms/10g

logfile location :
--------------------
/local/oracle/product/cfgtoollogs/dbua/logs


post upgrade steps :
========================
ps -ef |grep pmon
change home location in oratab.dat
connect to the database
select COMP_NAME,STATUS,VERSION,MODIFIED from dba_registry
select * from v$version;
select instance_name,host_name,version,status,startup_time from gv$instance;
select count(*) from dba_objects where status='INVALID';
@?/rdbms/admin/utlu112s.sql // status 





Log Files in RAC Environment:
=========================
The Cluster Ready Services Daemon (crsd) Log Files
Log files for the CRSD process (crsd) can be found in the following directories:
ORA_CRS_HOME/log/hostname/crsd
The crsd.log file is archived every 10 MB as crsd.101, crsd.102, ...
ORA_CRS_HOME/log/hostname/alert.log
Oracle Cluster Registry (OCR) Log Files
The Oracle Cluster Registry (OCR) records log information in the following location:
ORA_CRS_HOME/log/hostname/client
Cluster Synchronization Services (CSS) Log Files
You can find CSS information that the OCSSD generates in log files in the following locations:
ORA_CRS_HOME/log/hostname/cssd
OCSSD is responsible for inter-node health monitoring and instance endpoint recovery.
It runs as the oracle user.
The cssd.log file is archived every 20 MB as cssd.101, cssd.102, ...
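
A quick way to watch these logs on a node, assuming the directory layout described above (`hostname` supplies the node name):

tail -f $ORA_CRS_HOME/log/`hostname`/crsd/crsd.log
tail -f $ORA_CRS_HOME/log/`hostname`/cssd/ocssd.log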