This
chapter describes scenarios you might encounter while administering your Data
Guard configuration. Each scenario can be adapted to your specific
environment. Table 12-1 lists the scenarios presented in this chapter.
The
following sections set up the LOG_ARCHIVE_DEST_n initialization
parameter and other related parameters to enable and disable role-specific
archiving:
- Configuring a Primary Database and a
Physical Standby Database
- Configuring a Primary Database and a
Logical Standby Database
- Configuring Both Physical and Logical
Standby Databases
- Verifying the Current VALID_FOR Attribute
Settings for Each Destination
Figure 12-1 shows the chicago primary database,
the boston physical standby database, and the initialization
parameters for each system.
Table 12-2 shows the initialization parameters for the configuration
in Figure 12-1.
Chicago Database (Primary Role):

DB_UNIQUE_NAME=chicago
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
LOG_ARCHIVE_DEST_1=
  'LOCATION=/arch1/chicago/
   VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
   DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_2=
  'SERVICE=boston
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
STANDBY_ARCHIVE_DEST=/arch1/chicago/
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

Boston Database (Physical Standby Database Role):

DB_UNIQUE_NAME=boston
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
LOG_ARCHIVE_DEST_1=
  'LOCATION=/arch1/boston/
   VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
   DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2=
  'SERVICE=chicago
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
STANDBY_ARCHIVE_DEST=/arch1/boston/
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
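Although Table 12-2 shows these parameters as they would appear in an initialization parameter file, they can also be set dynamically when an SPFILE is in use. The following is a minimal, illustrative sketch of setting one of the chicago destinations with ALTER SYSTEM (not a required step in this scenario):

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2=
  2> 'SERVICE=boston VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston'
  3> SCOPE=BOTH;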
The following table describes the archival processing shown in Figure 12-1:

LOG_ARCHIVE_DEST_1
- Chicago (primary role): Directs archiving of redo data to the local archived redo log files in /arch1/chicago/.
- Boston (physical standby role): Directs archiving of redo data to the local archived redo log files in /arch1/boston/.

LOG_ARCHIVE_DEST_2
- Chicago (primary role): Directs transmission of the redo data to the remote physical standby database boston.
- Boston (physical standby role): Is ignored; valid only when boston is running in the primary role.

STANDBY_ARCHIVE_DEST
- Chicago (primary role): Is ignored; valid only when chicago is running in the standby role.
- Boston (physical standby role): Directs archival of redo data to the archived redo log files in the local directory /arch1/boston/.
The following table describes the archival processing shown in Figure 12-2:

LOG_ARCHIVE_DEST_1
- Chicago (physical standby role): Directs archiving of redo data to the local /arch1/chicago/ directory.
- Boston (primary role): Directs archiving of redo data to the local archived redo log files in /arch1/boston/.

LOG_ARCHIVE_DEST_2
- Chicago (physical standby role): Is ignored; valid only when chicago is running in the primary role.
- Boston (primary role): Directs transmission of redo data to the remote physical standby destination chicago.

STANDBY_ARCHIVE_DEST
- Chicago (physical standby role): Directs archiving of redo data to the archived redo log files in the local directory /arch1/chicago/.
- Boston (primary role): Is ignored; valid only when boston is running in the standby role.
Figure 12-3 shows the chicago database running in the primary
role, the denver database running in the logical standby role, and
the initialization parameters for each system. Inactive components are grayed
out.
Figure 12-3 Configuring Destinations for a Primary Database and a Logical Standby Database
Table 12-3 shows the initialization parameters for the configuration
in Figure 12-3.
Chicago Database (Primary Role):

DB_UNIQUE_NAME=chicago
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,denver)'
LOG_ARCHIVE_DEST_1=
  'LOCATION=/arch1/chicago/
   VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
   DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_2=
  'LOCATION=/arch2/chicago/
   VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
   DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_3=
  'SERVICE=denver
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=denver'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_STATE_3=ENABLE
STANDBY_ARCHIVE_DEST=/arch2/chicago/
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

Denver Database (Logical Standby Database Role):

DB_UNIQUE_NAME=denver
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,denver)'
LOG_ARCHIVE_DEST_1=
  'LOCATION=/arch1/denver/
   VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
   DB_UNIQUE_NAME=denver'
LOG_ARCHIVE_DEST_2=
  'LOCATION=/arch2/denver/
   VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
   DB_UNIQUE_NAME=denver'
LOG_ARCHIVE_DEST_3=
  'SERVICE=chicago
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_STATE_3=ENABLE
STANDBY_ARCHIVE_DEST=/arch2/denver/
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
The following table describes the archival processing shown in Figure 12-3:

LOG_ARCHIVE_DEST_1
- Chicago (primary role): Directs archiving of redo data generated by the primary database from the local online redo log files to the local archived redo log files in /arch1/chicago/.
- Denver (logical standby role): Directs archiving of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/denver/.

LOG_ARCHIVE_DEST_2
- Chicago (primary role): Is ignored; valid only when chicago is running in the standby role. (You must configure a standby redo log on this site to perform switchovers.)
- Denver (logical standby role): Directs archiving of redo data from the standby redo log files to the local archived redo log files in /arch2/denver/.

LOG_ARCHIVE_DEST_3
- Chicago (primary role): Directs transmission of redo data to the remote logical standby destination denver.
- Denver (logical standby role): Is ignored; valid only when denver is running in the primary role.

STANDBY_ARCHIVE_DEST
- Chicago (primary role): Is ignored; valid only when chicago is running in the standby role.
- Denver (logical standby role): Directs archiving of redo data received from the primary database directly to archived redo log files in /arch2/denver/.
Unlike physical standby databases, logical standby databases are open databases that generate redo data and have multiple log files (online redo log files, archived redo log files, and standby redo log files). It is good practice to specify separate local destinations for:
- Archived redo log files that store redo data generated by the logical standby database. In Figure 12-3, this is configured as the LOG_ARCHIVE_DEST_1=LOCATION=/arch1/denver destination.
- Archived redo log files that store redo data received from the primary database. In Figure 12-3, this is configured as the LOG_ARCHIVE_DEST_2=LOCATION=/arch2/denver destination.
In Figure 12-3, the STANDBY_ARCHIVE_DEST parameter is configured to
the same location for these purposes:
- If the standby redo log files
fill up, redo data received from the primary database is archived
directly to the archived redo log files in this location (described
in Section 5.7.1).
- If there is an archive gap,
archived redo log files retrieved from other databases are copied to this
location (described in Section 5.8).
Because the example configurations shown in Figure 12-3 (and Figure 12-4) do not include a physical standby database, the configuration sets up the LOG_ARCHIVE_DEST_3 destination for switchover with the logical standby database. Figure 12-4 shows the same configuration after a switchover.
The following table describes the archival processing shown in Figure 12-4:

LOG_ARCHIVE_DEST_1
- Chicago (logical standby role): Directs archiving of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/chicago/.
- Denver (primary role): Directs archiving of redo data from the local online redo log files to the local archived redo log files in /arch1/denver/.

LOG_ARCHIVE_DEST_2
- Chicago (logical standby role): Directs archiving of redo data from the standby redo log files to the archived redo log files in /arch2/chicago/.
- Denver (primary role): Is ignored; valid only when denver is running in the standby role.

LOG_ARCHIVE_DEST_3
- Chicago (logical standby role): Is ignored; valid only when chicago is running in the primary role.
- Denver (primary role): Directs transmission of redo data to the remote logical standby destination chicago.

STANDBY_ARCHIVE_DEST
- Chicago (logical standby role): Directs archiving of the redo data received from the primary database directly to the archived redo log files in /arch2/chicago/.
- Denver (primary role): Is ignored; valid only when denver is running in the standby role.
Figure 12-5 shows the chicago database running in the primary
role, the boston database running in the physical standby role, and
the denver database running in the logical standby database role. The
initialization parameters are shown under each system. Components that are
grayed out are inactive for the database's current role. This example assumes
that a switchover would occur only between chicago and boston.
In this configuration, the denver logical standby database is
intended to be a reporting database only; denver will never be the
target of a switchover or run in the primary database role.
Figure 12-5 Configuring a Primary Database with Physical and Logical Standby Databases
Table 12-4 shows the initialization parameters for the databases
in Figure 12-5.
Boston Database (Standby Role):

DB_UNIQUE_NAME=boston
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston,denver)'
LOG_ARCHIVE_DEST_1=
  'LOCATION=/arch1/boston/
   VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
   DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2=
  'SERVICE=denver
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=denver'
LOG_ARCHIVE_DEST_3=
  'SERVICE=chicago
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_STATE_3=ENABLE
STANDBY_ARCHIVE_DEST=/arch1/boston/
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

Chicago Database (Primary Role):

DB_UNIQUE_NAME=chicago
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston,denver)'
LOG_ARCHIVE_DEST_1=
  'LOCATION=/arch1/chicago/
   VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
   DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_2=
  'SERVICE=denver
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=denver'
LOG_ARCHIVE_DEST_3=
  'SERVICE=boston
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_STATE_3=ENABLE
STANDBY_ARCHIVE_DEST=/arch1/chicago/
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE

Denver Database (Standby Role):

DB_UNIQUE_NAME=denver
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston,denver)'
LOG_ARCHIVE_DEST_1=
  'LOCATION=/arch1/denver/
   VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
   DB_UNIQUE_NAME=denver'
LOG_ARCHIVE_DEST_2=
  'LOCATION=/arch2/denver/
   VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
   DB_UNIQUE_NAME=denver'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
STANDBY_ARCHIVE_DEST=/arch2/denver/
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
The following table describes the archival processing shown in Figure 12-5:

LOG_ARCHIVE_DEST_1
- Chicago (primary role): Directs archiving of redo data from the online redo log files to the local archived redo log files in /arch1/chicago/.
- Boston (standby role): Directs archiving of redo data from the standby redo log files to the local archived redo log files in /arch1/boston/.
- Denver (standby role): Directs archiving of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/denver/.

LOG_ARCHIVE_DEST_2
- Chicago (primary role): Directs transmission of redo data to the remote logical standby destination denver.
- Boston (standby role): Is ignored; valid only when boston is running in the primary role.
- Denver (standby role): Directs archiving of redo data from the standby redo log files to the local archived redo log files in /arch2/denver/.

LOG_ARCHIVE_DEST_3
- Chicago (primary role): Directs transmission of redo data to the remote physical standby destination boston.
- Boston (standby role): Is ignored; valid only when boston is running in the primary role.
- Denver (standby role): Is not defined for this database.

STANDBY_ARCHIVE_DEST
- Chicago (primary role): Is ignored; valid only for the standby role.
- Boston (standby role): Directs archiving of redo data received from the primary database directly to archived redo log files in /arch1/boston/.
- Denver (standby role): Directs archiving of redo data received from the primary database directly to archived redo log files in /arch2/denver/.
Figure 12-6 shows the same configuration after a switchover changes
the chicago database to the standby role and
the boston database to the primary role.
Figure 12-6 Primary, Physical, and Logical Standby Databases After a Role Transition
The following table describes the archival processing shown in Figure 12-6:

LOG_ARCHIVE_DEST_1
- Chicago (standby role): Directs archival of redo data from the standby redo log files to the local archived redo log files in /arch1/chicago/.
- Boston (primary role): Directs archival of redo data from the online redo log files to the local archived redo log files in /arch1/boston/.
- Denver (standby role): Directs archival of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/denver/.

LOG_ARCHIVE_DEST_2
- Chicago (standby role): Is ignored; valid only when chicago is running in the primary role.
- Boston (primary role): Directs transmission of redo data to the remote logical standby destination denver.
- Denver (standby role): Directs archival of redo data from the standby redo log files to the local archived redo log files in /arch2/denver/.

LOG_ARCHIVE_DEST_3
- Chicago (standby role): Is ignored; valid only when chicago is running in the primary role.
- Boston (primary role): Directs transmission of redo data to the remote physical standby destination chicago.
- Denver (standby role): Is not defined for this database.

STANDBY_ARCHIVE_DEST
- Chicago (standby role): Directs archival of redo data received from the primary database directly to the archived redo log files in /arch1/chicago/.
- Boston (primary role): Is ignored; valid only for the standby role.
- Denver (standby role): Directs archival of redo data received from the primary database directly to archived redo log files in /arch2/denver/.
To see whether the current VALID_FOR attribute settings are valid for each destination in the Data Guard configuration, query the V$ARCHIVE_DEST view, as shown in Example 12-1.

SQL> SELECT DEST_ID, VALID_TYPE, VALID_ROLE, VALID_NOW FROM V$ARCHIVE_DEST;

   DEST_ID VALID_TYPE      VALID_ROLE   VALID_NOW
---------- --------------- ------------ ----------------
         1 ALL_LOGFILES    ALL_ROLES    YES
         2 STANDBY_LOGFILE STANDBY_ROLE WRONG VALID_TYPE
         3 ONLINE_LOGFILE  STANDBY_ROLE WRONG VALID_ROLE
         4 ALL_LOGFILES    ALL_ROLES    UNKNOWN
         5 ALL_LOGFILES    ALL_ROLES    UNKNOWN
         6 ALL_LOGFILES    ALL_ROLES    UNKNOWN
         7 ALL_LOGFILES    ALL_ROLES    UNKNOWN
         8 ALL_LOGFILES    ALL_ROLES    UNKNOWN
         9 ALL_LOGFILES    ALL_ROLES    UNKNOWN
        10 ALL_LOGFILES    ALL_ROLES    UNKNOWN

10 rows selected.
In Example 12-1, each line represents one of the ten destinations in the Data Guard configuration. The first line indicates that the VALID_FOR attribute for LOG_ARCHIVE_DEST_1 is set to (ALL_LOGFILES,ALL_ROLES), which is the only keyword pair that is valid at all times.
More
interesting are the second and third lines in the view, which are both
currently invalid, but for different reasons:
- LOG_ARCHIVE_DEST_2 is set
to (STANDBY_LOGFILES,STANDBY_ROLE), but the WRONG
VALID_TYPE is returned because this standby destination does not have
a standby redo log implemented.
- LOG_ARCHIVE_DEST_3 is set
to (ONLINE_LOGFILES,STANDBY_ROLE), but the WRONG
VALID_ROLE is returned because this destination is currently running
in the primary database role.
All
of the other destinations are shown as UNKNOWN, which indicates the
destinations are either undefined or the database is started and mounted but
archiving is not currently taking place. See the V$ARCHIVE_DEST view
in the Oracle Database Reference for information about these and other
columns.
The
following sections provide scenarios that illustrate, in a step-by-step
fashion, how to choose the best available standby database for failover:
- Example: Best Physical Standby Database
for a Failover
- Example: Best Logical Standby Database
for a Failover
See Also:
Section 7.1.2 for general guidelines about selecting a
target standby database for switchover and failover
If
a configuration contains both physical and logical standby databases, Oracle
recommends that you perform the role transition using the best available
physical standby database. This is recommended because:
- A logical standby database
might contain only a subset of the data present in the primary database.
- A role transition involving a
logical standby database requires that any existing physical standby
databases be re-created from a copy of the new primary database (after the
role transition is complete) to continue to participate in the Data Guard
configuration.
Because
of these limitations, a logical standby database should be considered as the
target for a role transition only in the following special situations:
- The configuration contains only
logical standby databases.
- It is critical to fail over a
standby database to the primary role as quickly as possible, and the most
current logical standby database in the configuration is significantly
more current than the most current physical standby database in the
configuration.
Once you determine whether to use a physical or a logical standby database, the specific standby database you select as the target for the role transition is determined by how much of the recent primary database modifications are available on the standby database. Because the primary database remains accessible during switchovers, there will be no loss of data, and the choice of the standby database used during a switchover will only affect the time required to complete the switchover. For failovers, however, the choice of standby database might involve a trade-off between additional risk of data loss and the time required to transition a standby database to the primary role.
In a disaster, the most critical task for the DBA is to determine if it is quicker and safer to repair the primary database or fail over to a standby database. When deciding that a failover is necessary and multiple physical standby databases are configured, the DBA must choose which physical standby database is the best target for the failover. While there are many environmental factors that can affect which standby database represents the best choice, this scenario assumes these factors are equal so that the assessment of data loss can be emphasized.
This
scenario begins with a Data Guard configuration consisting of the HQ primary
database and two physical standby databases, SAT and NYC. The HQ database is
operating in maximum availability protection mode, and the standby databases
are each configured with three standby redo log files. See Section 1.4 for more information about the maximum availability
protection mode for physical standby databases.
Table 12-5 provides information about the databases used in this scenario.

Identifier                       HQ Database            SAT Database            NYC Database
-------------------------------  ---------------------  ----------------------  ----------------------
Location                         San Francisco          Seattle                 New York City
Database name                    HQ                     HQ                      HQ
Instance name                    HQ                     SAT                     NYC
Initialization parameter file    hq_init.ora            sat_init.ora            nyc_init.ora
Control file                     hq_cf1.f               sat_cf1.f               nyc_cf1.f
Datafile                         hq_db1.f               sat_db1.f               nyc_db1.f
Redo log file 1                  hq_log1.f              sat_log1.f              nyc_log1.f
Redo log file 2                  hq_log2.f              sat_log2.f              nyc_log2.f
Standby redo log file 1          hq_srl1.f              sat_srl1.f              nyc_srl1.f
Standby redo log file 2          hq_srl2.f              sat_srl2.f              nyc_srl2.f
Standby redo log file 3          hq_srl3.f              sat_srl3.f              nyc_srl3.f
Primary protection mode          Maximum availability   Not applicable          Not applicable
Standby protection mode          Not applicable         Maximum availability    Maximum performance
                                                        (synchronous)           (asynchronous)
Network service name             hq_net                 sat_net                 nyc_net
(client defined)
Listener                         hq_listener            sat_listener            nyc_listener
Note:
The New York City database is operating in
maximum performance mode because sending redo data synchronously from HQ to NYC
might impact the primary database performance during peak workload periods.
However, the New York City standby database is still considered a viable candidate
for failovers because it uses a standby redo log.
Assume
that an event occurs in San Francisco where the primary site is located, and
the primary site is damaged in such a way that it cannot be repaired in a
timely manner. You must fail over to one of the standby databases. You cannot
assume that the DBA who set up the multiple standby database configuration is
available to decide to which standby database to fail over. Therefore, it is
imperative to have a disaster recovery plan at each standby site, as well as at
the primary site. Each member of the disaster recovery team needs to know about
the disaster recovery plan and be aware of the procedures to follow. This
scenario identifies the information you need when deciding which standby
database should be the target of the failover.
One
method of conveying information to the disaster recovery team is to include
a ReadMe file at each standby site. This ReadMe
file is created and maintained by the DBA and should describe how to:
- Log on to the local Oracle
database as a DBA
- Log on to each system where the
standby databases are located
- Get instructions for going
through firewalls, because there might be firewalls between systems
- Log on to other Oracle
databases as a DBA
- Identify the most up-to-date
standby database
- Perform the standby database
failover
- Configure network settings to
ensure client applications access the new primary database, instead of the
original primary database
When
choosing a standby database, there are two critical considerations: which
standby database received the most recent redo data and which standby database
has applied the most redo.
Follow
these steps to determine which standby database is the best candidate for
failover when only physical standby databases are in the configuration. Always
start with the standby database providing the highest protection level. In this
scenario, the Seattle standby database provides the highest protection level
because it is operating in maximum availability protection mode.
Step 1 Connect SQL*Plus to the SAT database.

Issue a SQL statement such as the following:

SQL> CONNECT SYS/CHANGE_ON_INSTALL AS SYSDBA;
Step 2 Determine how much current redo data is available in the archived redo log file.

Query the columns in the V$MANAGED_STANDBY view, as shown:

SQL> SELECT THREAD#, SEQUENCE#, BLOCK#, BLOCKS
  2> FROM V$MANAGED_STANDBY WHERE STATUS='RECEIVING';

   THREAD#  SEQUENCE#     BLOCK#     BLOCKS
---------- ---------- ---------- ----------
         1         14        234         16
This
standby database received 249 blocks of redo data from the primary database. To
compute the number of blocks received, add the BLOCKS column value to
the BLOCK# column value, and subtract 1 (because block number 234 is
included in the 16 blocks received).
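The same arithmetic can be folded into the query itself. The following is a minimal sketch (same view and columns as the query above; the LAST_BLOCK_RECEIVED alias is illustrative), not a required step:

SQL> SELECT THREAD#, SEQUENCE#, BLOCK# + BLOCKS - 1 AS LAST_BLOCK_RECEIVED
  2> FROM V$MANAGED_STANDBY WHERE STATUS='RECEIVING';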
Note:
Depending on how long the primary database has
been unavailable, the previous query might not return any selected rows because
the RFS process might detect the network disconnection and terminate itself. If
this occurs, it is always best to select a standby database that is configured
to receive the redo data in a synchronous manner.
Step 3 Obtain a list of the
archived redo log files that were applied or are currently pending application
to the SAT database.
Query the V$ARCHIVED_LOG view:

SQL> SELECT SUBSTR(NAME,1,25) FILE_NAME, SEQUENCE#, APPLIED
  2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

FILE_NAME                  SEQUENCE# APP
------------------------- ---------- ---
/oracle/dbs/hq_sat_2.log           2 YES
/oracle/dbs/hq_sat_3.log           3 YES
/oracle/dbs/hq_sat_4.log           4 YES
/oracle/dbs/hq_sat_5.log           5 YES
/oracle/dbs/hq_sat_6.log           6 YES
/oracle/dbs/hq_sat_7.log           7 YES
/oracle/dbs/hq_sat_8.log           8 YES
/oracle/dbs/hq_sat_9.log           9 YES
/oracle/dbs/hq_sat_10.log         10 YES
/oracle/dbs/hq_sat_11.log         11 YES
/oracle/dbs/hq_sat_13.log         13 NO
This output indicates that archived redo log file 11 was completely applied to the standby database.
Also,
notice the gap in the sequence numbers in the SEQUENCE# column. In
the example, the gap indicates the SAT standby database is missing archived
redo log file number 12.
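On a physical standby database, such a gap can also be surfaced directly by querying the V$ARCHIVE_GAP view rather than scanning the sequence numbers by eye. A minimal sketch follows; the output shown is illustrative for this scenario's missing log file 12:

SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;

   THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
---------- ------------- --------------
         1            12             12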
Step 4 Connect SQL*Plus to the NYC database.

Issue a SQL statement such as the following:

SQL> CONNECT SYS/CHANGE_ON_INSTALL AS SYSDBA;
Step 5 Determine how much current redo data is available in the archived redo log file.

Query the columns in the V$MANAGED_STANDBY view as shown:

SQL> SELECT THREAD#, SEQUENCE#, BLOCK#, BLOCKS
  2> FROM V$MANAGED_STANDBY WHERE STATUS='RECEIVING';

   THREAD#  SEQUENCE#     BLOCK#     BLOCKS
---------- ---------- ---------- ----------
         1         14        157         93
This standby database has also received 249 blocks of redo information from the primary database. To compute the number of blocks received, add the BLOCKS column value to the BLOCK# column value, and subtract 1 (because block number 157 is included in the 93 blocks received).
Step 6 Obtain a list of the
archived redo log files that were applied or are currently pending application
to the NYC database.
Query the V$ARCHIVED_LOG view:

SQL> SELECT SUBSTR(NAME,1,25) FILE_NAME, SEQUENCE#, APPLIED
  2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

FILE_NAME                  SEQUENCE# APP
------------------------- ---------- ---
/oracle/dbs/hq_nyc_2.log           2 YES
/oracle/dbs/hq_nyc_3.log           3 YES
/oracle/dbs/hq_nyc_4.log           4 YES
/oracle/dbs/hq_nyc_5.log           5 YES
/oracle/dbs/hq_nyc_6.log           6 YES
/oracle/dbs/hq_nyc_7.log           7 YES
/oracle/dbs/hq_nyc_8.log           8 NO
/oracle/dbs/hq_nyc_9.log           9 NO
/oracle/dbs/hq_nyc_10.log         10 NO
/oracle/dbs/hq_nyc_11.log         11 NO
/oracle/dbs/hq_nyc_12.log         12 NO
/oracle/dbs/hq_nyc_13.log         13 NO
This output indicates that archived redo log file 7 was completely applied to the standby database.
More
redo data was received at this location, but less was applied to the standby
database.
In
most cases, the physical standby database you choose as a failover target
should provide a balance between risk of data loss and time required to perform
the role transition. As you analyze this information to make a decision about
the best failover candidate in this scenario, consider the following:
- For minimal risk of data loss during a failover, you should choose the NYC database as the best target standby database because Steps 5 and 6 revealed that the NYC site has the most recoverable redo.
- For minimal primary database downtime during the failover operation, you should choose the SAT database as the best target standby database. This database is a more appropriate candidate because the queries in Steps 2 through 6 reveal that the SAT database applied 5 more archived redo log files than the NYC database. However, if it is not possible to obtain and apply a copy of the missing archived redo log file (log 12 in the example), then you will not be able to make the SAT database as current as you can the NYC database. Therefore, you will lose the unapplied data (log files 12, 13, and part of log file 14 in the example).
Based
on your business requirements, choose the best target standby database.
- If you chose the SAT database as the best target based on your business requirements, perform the following steps:

  1. Retrieve any missing archived redo log files using an operating system copy utility. (This example uses the UNIX cp command.) In this case, the SAT database is missing archived redo log file 12. Because the NYC database received this archived redo log file, you can copy it from the NYC database to the SAT database, as follows:

     % cp /net/nyc/oracle/dbs/hq_nyc_12.log /net/sat/oracle/dbs/hq_sat_12.log

  2. Determine if a partial archived redo log file exists for the next sequence number. In this example, the next sequence number should be 14. The following UNIX command searches the directory on the SAT database for the presence of an archived redo log file named hq_sat_14.log:

     % ls -l /net/sat/oracle/dbs/hq_sat_14.log
     /net/sat/oracle/dbs/hq_sat_14.log: No such file or directory

     Because the SAT standby database is using standby redo log files, there should not be any partial archived redo log files.

  3. Register the retrieved archived redo log file. (There is no need to stop log apply services.)

     SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE '/oracle/dbs/hq_sat_12.log';

  4. Query the V$ARCHIVED_LOG view again to make sure the archived redo log files were successfully applied:

     SQL> SELECT SUBSTR(NAME,1,25) FILE_NAME, SEQUENCE#, APPLIED
       2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

     FILE_NAME                  SEQUENCE# APP
     ------------------------- ---------- ---
     /oracle/dbs/hq_sat_2.log           2 YES
     /oracle/dbs/hq_sat_3.log           3 YES
     /oracle/dbs/hq_sat_4.log           4 YES
     /oracle/dbs/hq_sat_5.log           5 YES
     /oracle/dbs/hq_sat_6.log           6 YES
     /oracle/dbs/hq_sat_7.log           7 YES
     /oracle/dbs/hq_sat_8.log           8 YES
     /oracle/dbs/hq_sat_9.log           9 YES
     /oracle/dbs/hq_sat_10.log         10 YES
     /oracle/dbs/hq_sat_11.log         11 YES
     /oracle/dbs/hq_sat_12.log         12 YES
     /oracle/dbs/hq_sat_13.log         13 YES
- If you chose the NYC database as the best target based on your business requirements, perform the following steps:

  1. Determine if a partial archived redo log file exists for the next sequence number. The following UNIX command searches the directory on the NYC database for the presence of an archived redo log file named with the next sequence (hq_nyc_14):

     % ls -l /net/nyc/oracle/dbs/hq_nyc_14.log
     /net/nyc/oracle/dbs/hq_nyc_14.log: No such file or directory

     Because the NYC standby database is using standby redo log files, there should not be any partial archived redo log files.

  2. Start log apply services to apply the most current log file:

     SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
       2> DISCONNECT FROM SESSION;

  3. Query the V$ARCHIVED_LOG view again to make sure the archived redo log files were successfully applied:

     SQL> SELECT SUBSTR(NAME,1,25) FILE_NAME, SEQUENCE#, APPLIED
       2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

     FILE_NAME                  SEQUENCE# APP
     ------------------------- ---------- ---
     /oracle/dbs/hq_nyc_2.log           2 YES
     /oracle/dbs/hq_nyc_3.log           3 YES
     /oracle/dbs/hq_nyc_4.log           4 YES
     /oracle/dbs/hq_nyc_5.log           5 YES
     /oracle/dbs/hq_nyc_6.log           6 YES
     /oracle/dbs/hq_nyc_7.log           7 YES
     /oracle/dbs/hq_nyc_8.log           8 YES
     /oracle/dbs/hq_nyc_9.log           9 YES
     /oracle/dbs/hq_nyc_10.log         10 YES
     /oracle/dbs/hq_nyc_11.log         11 YES
     /oracle/dbs/hq_nyc_12.log         12 NO
     /oracle/dbs/hq_nyc_13.log         13 NO
Applying the archived redo log files might take some time to
complete. Therefore, you must wait until all archived redo log files are
designated as applied, as shown:
SQL> SELECT SUBSTR(NAME,1,25) FILE_NAME, SEQUENCE#, APPLIED
  2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

FILE_NAME                  SEQUENCE# APP
------------------------- ---------- ---
/oracle/dbs/hq_nyc_2.log           2 YES
/oracle/dbs/hq_nyc_3.log           3 YES
/oracle/dbs/hq_nyc_4.log           4 YES
/oracle/dbs/hq_nyc_5.log           5 YES
/oracle/dbs/hq_nyc_6.log           6 YES
/oracle/dbs/hq_nyc_7.log           7 YES
/oracle/dbs/hq_nyc_8.log           8 YES
/oracle/dbs/hq_nyc_9.log           9 YES
/oracle/dbs/hq_nyc_10.log         10 YES
/oracle/dbs/hq_nyc_11.log         11 YES
/oracle/dbs/hq_nyc_12.log         12 YES
/oracle/dbs/hq_nyc_13.log         13 YES
You
are now ready to stop log apply services and fail over the selected physical
standby database to the primary role.
See Section 7.2.2 for additional information about how to
fail over to a physical standby database.
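As a hedged sketch of what that failover involves (Section 7.2.2 remains the authoritative procedure), the statements issued on the chosen physical standby database, after any missing log files have been registered, are along these lines:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH FORCE;
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;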
In a disaster when only logical standby databases are available, the critical task is to determine which logical standby database is the best target for the failover. While there are many environmental factors that can affect which is the best target standby database, this scenario assumes these factors are equal so that the assessment of data loss can be emphasized. See Section 1.4 for more information about the maximum availability protection mode for logical standby databases.
This scenario starts out with a Data Guard configuration consisting of the HQ primary database and two logical standby databases, SAT and NYC. Table 12-6 provides information about each of these databases.
Identifier                       HQ Database      SAT Database     NYC Database
-------------------------------  ---------------  ---------------  ---------------
Location                         San Francisco    Seattle          New York City
Database name                    HQ               SAT              NYC
Instance name                    HQ               SAT              NYC
Initialization parameter file    hq_init.ora      sat_init.ora     nyc_init.ora
Control file                     hq_cf1.f         sat_cf1.f        nyc_cf1.f
Datafile                         hq_db1.f         sat_db1.f        nyc_db1.f
Redo log file 1                  hq_log1.f        sat_log1.f       nyc_log1.f
Redo log file 2                  hq_log2.f        sat_log2.f       nyc_log2.f
Database link (client-defined)   hq_link          sat_link         nyc_link
Network service name             hq_net           sat_net          nyc_net
(client-defined)
Listener                         hq_listener      sat_listener     nyc_listener
Follow
these steps to determine which standby database is the best candidate for
failover when only logical standby databases are in the configuration:
Step 1 Connect SQL*Plus to the SAT database.

Issue a SQL statement such as the following:

SQL> CONNECT SYS/CHANGE_ON_INSTALL AS SYSDBA;
Step 2 Determine the highest applied SCN and highest applicable SCN on the SAT database.

Query the following columns in the V$LOGSTDBY_PROGRESS view:

SQL> SELECT APPLIED_SCN, LATEST_SCN FROM V$LOGSTDBY_PROGRESS;

APPLIED_SCN LATEST_SCN
----------- ----------
     144059     144059
Step 3 Obtain a list of the archived redo log files that
were applied or are currently pending application to the SAT database.
SQL> SELECT SUBSTR(FILE_NAME,1,25) FILE_NAME, SUBSTR(SEQUENCE#,1,4) "SEQ#",
  2> FIRST_CHANGE#, NEXT_CHANGE#, TO_CHAR(TIMESTAMP, 'HH:MI:SS') TIMESTAMP,
  3> DICT_BEGIN BEG, DICT_END END, SUBSTR(THREAD#,1,4) "THR#"
  4> FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;

FILE_NAME                 SEQ# FIRST_CHANGE# NEXT_CHANGE# TIMESTAM BEG END THR#
------------------------- ---- ------------- ------------ -------- --- --- ----
/oracle/dbs/hq_sat_2.log  2           101579       101588 11:02:57 NO  NO  1
/oracle/dbs/hq_sat_3.log  3           101588       142065 11:02:01 NO  NO  1
/oracle/dbs/hq_sat_4.log  4           142065       142307 11:02:09 NO  NO  1
/oracle/dbs/hq_sat_5.log  5           142307       142739 11:02:47 YES YES 1
/oracle/dbs/hq_sat_6.log  6           142739       143973 12:02:09 NO  NO  1
/oracle/dbs/hq_sat_7.log  7           143973       144042 01:02:00 NO  NO  1
/oracle/dbs/hq_sat_8.log  8           144042       144051 01:02:00 NO  NO  1
/oracle/dbs/hq_sat_9.log  9           144051       144054 01:02:15 NO  NO  1
/oracle/dbs/hq_sat_10.log 10          144054       144057 01:02:20 NO  NO  1
/oracle/dbs/hq_sat_11.log 11          144057       144060 01:02:25 NO  NO  1
/oracle/dbs/hq_sat_13.log 13          144089       144147 01:02:40 NO  NO  1
Notice that for log file 11, the SCN of 144059 (recorded in Step 2) is between the FIRST_CHANGE# column value of 144057 and the NEXT_CHANGE# column value of 144060. This indicates log file 11 is currently being applied. Also, notice the gap in the sequence numbers in the SEQ# column; in the example, the gap indicates that the SAT database is missing archived redo log file 12.
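A gap like this can also be detected with a query instead of a visual scan of the SEQ# column. The following is a minimal sketch against DBA_LOGSTDBY_LOG (it assumes a single redo thread; the GAP_START alias is illustrative):

SQL> SELECT SEQUENCE# + 1 AS GAP_START
  2> FROM DBA_LOGSTDBY_LOG A
  3> WHERE NOT EXISTS
  4> (SELECT 1 FROM DBA_LOGSTDBY_LOG B WHERE B.SEQUENCE# = A.SEQUENCE# + 1)
  5> AND SEQUENCE# < (SELECT MAX(SEQUENCE#) FROM DBA_LOGSTDBY_LOG);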
Step 4 Connect SQL*Plus to the NYC database.

Issue a SQL statement such as the following:

SQL> CONNECT SYS/CHANGE_ON_INSTALL AS SYSDBA;
Step 5 Determine the highest applied SCN and highest applicable SCN on the NYC database.

Query the following columns in the V$LOGSTDBY_PROGRESS view:

SQL> SELECT APPLIED_SCN, LATEST_SCN FROM V$LOGSTDBY_PROGRESS;

APPLIED_SCN LATEST_SCN
----------- ----------
     143970     144146
Step 6 Obtain a list of the log
files that were processed or are currently pending processing on the NYC
database.
Issue
a SQL statement such as the following:
SQL> SELECT SUBSTR(FILE_NAME,1,25) FILE_NAME, SUBSTR(SEQUENCE#,1,4) "SEQ#",
  2> FIRST_CHANGE#, NEXT_CHANGE#, TO_CHAR(TIMESTAMP, 'HH:MI:SS') TIMESTAMP,
  3> DICT_BEGIN BEG, DICT_END END, SUBSTR(THREAD#,1,4) "THR#"
  4> FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;

FILE_NAME                 SEQ# FIRST_CHANGE# NEXT_CHANGE# TIMESTAM BEG END THR#
------------------------- ---- ------------- ------------ -------- --- --- ----
/oracle/dbs/hq_nyc_2.log  2           101579       101588 11:02:58 NO  NO  1
/oracle/dbs/hq_nyc_3.log  3           101588       142065 11:02:02 NO  NO  1
/oracle/dbs/hq_nyc_4.log  4           142065       142307 11:02:10 NO  NO  1
/oracle/dbs/hq_nyc_5.log  5           142307       142739 11:02:48 YES YES 1
/oracle/dbs/hq_nyc_6.log  6           142739       143973 12:02:10 NO  NO  1
/oracle/dbs/hq_nyc_7.log  7           143973       144042 01:02:11 NO  NO  1
/oracle/dbs/hq_nyc_8.log  8           144042       144051 01:02:01 NO  NO  1
/oracle/dbs/hq_nyc_9.log  9           144051       144054 01:02:16 NO  NO  1
/oracle/dbs/hq_nyc_10.log 10          144054       144057 01:02:21 NO  NO  1
/oracle/dbs/hq_nyc_11.log 11          144057       144060 01:02:26 NO  NO  1
/oracle/dbs/hq_nyc_12.log 12          144060       144089 01:02:30 NO  NO  1
/oracle/dbs/hq_nyc_13.log 13          144089       144147 01:02:41 NO  NO  1
Notice
that for log file 6, the SCN of 143970 (recorded in Step 5) is between
the FIRST_CHANGE# column value of 142739 and
the NEXT_CHANGE# column value of 143973. This indicates that log file
6 is currently being applied. Also, notice that there are no gaps in the
sequence of log files that remain to be processed.
In
most cases, the logical standby database you choose as a failover target should
provide a balance between risk of data loss and time required to perform the
role transition. As you analyze this information to make a decision about the
best failover candidate in this scenario, consider the following:
- For minimal risk of data loss during a failover, you should choose the NYC database as the best target standby database because Steps 5 and 6 revealed that the NYC site has the most recoverable archived redo log files.
- For minimal primary database downtime during the failover, you should choose the SAT database as the best target standby database. This database is a more appropriate candidate because the queries in Steps 2 through 6 reveal that the SAT database applied 5 more archived redo log files than the NYC database (even though there was only a 1-second delay (lag) in the receipt of archived redo log files by the NYC database). However, if it is not possible to obtain and apply a copy of the missing archived redo log file (log file 12 in the example), then you will not be able to make the SAT database as current as you can the NYC database. Therefore, you will lose the unrecovered data (log files 12, 13, and part of log file 14 in the example).
Based
on your business requirements, choose the best target standby database.
If you chose the SAT database as the best target based on your business requirements, perform the following steps:

1. Manually retrieve any missing archived redo log files using an operating system utility. (This example uses the UNIX cp command.) In this case, the SAT database is missing archived redo log file 12. Because the NYC database received this archived redo log file, you can copy it from the NYC database to the SAT database, as follows:

   % cp /net/nyc/oracle/dbs/hq_nyc_12.log /net/sat/oracle/dbs/hq_sat_12.log

2. Determine if a partial archived redo log file exists for the next sequence number. In this example, the next sequence number should be 14. The following UNIX command shows the directory on the SAT database, looking for the presence of an archived redo log file named hq_sat_14.log:

   % ls -l /net/sat/oracle/dbs/hq_sat_14.log
   -rw-rw----  1 oracle  dbs  333280 Feb 12  1:03 hq_sat_14.log

3. Stop log apply services and register both the retrieved archived redo log file and the partial archived redo log file:

   SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
   SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/oracle/dbs/hq_sat_12.log';
   SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/oracle/dbs/hq_sat_14.log';

4. Start log apply services to apply the most current log file:

   SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;

5. Determine the highest applied SCN on the SAT database by querying the V$LOGSTDBY_PROGRESS view to see if the value of the APPLIED_SCN column is equal to the value of the LATEST_SCN column:

   SQL> SELECT APPLIED_SCN, LATEST_SCN FROM V$LOGSTDBY_PROGRESS;

   APPLIED_SCN LATEST_SCN
   ----------- ----------
        144205     144205

Because the SCN values match, you can be assured that there is no longer a delay (lag) between the current log file on the primary database and the last log file applied to the SAT database.
If you chose the NYC database as the best target based on your business requirements, perform the following steps:

1. Determine if a partial archived redo log file exists for the next sequence number. In this example, the next sequence number should be 14. The following UNIX command shows the directory on the NYC database, looking for the presence of an archived redo log file named hq_nyc_14:

   % ls -l /net/nyc/oracle/dbs/hq_nyc_14.log
   -rw-rw----  1 oracle  dbs  333330 Feb 12  1:03 hq_nyc_14.log

2. Stop log apply services and register the partial archived redo log file:

   SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
   SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/oracle/dbs/hq_nyc_14.log';

3. Start log apply services to apply the most current log file:

   SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;

4. Determine the highest applied SCN on the NYC database by querying the V$LOGSTDBY_PROGRESS view to see if the value of the APPLIED_SCN column is equal to the value of the LATEST_SCN column:

   SQL> SELECT APPLIED_SCN, LATEST_SCN FROM V$LOGSTDBY_PROGRESS;

   APPLIED_SCN LATEST_SCN
   ----------- ----------
        144205     144205

Because the SCN values match, you can be sure there is no longer a delay (lag) between the current log file on the primary database and the last log file received and applied by the NYC database.
You
are now ready to stop log apply services and fail over the selected logical
standby database to the primary role.
See Section 7.3.2 for additional information on how to
perform the failover.
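As a hedged sketch (Section 7.3.2 is the authoritative procedure), the failover itself comes down to a statement such as the following on the chosen logical standby database:

SQL> ALTER DATABASE ACTIVATE LOGICAL STANDBY DATABASE FINISH APPLY;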
This
section presents the steps required on a logical standby database after the
primary database has failed over to another standby database. After a failover
has occurred, a logical standby database cannot act as a standby database for
the new primary database until it has applied the final redo from the original
primary database. This is similar to the way the new primary database applied
the final redo during the failover. The steps you must perform depend on
whether the new primary database was a physical standby or a logical standby
database prior to the failover:
- Section 12.3.1, "When the New
Primary Database Was Formerly a Physical Standby Database"
- Section 12.3.2, "When the New
Primary Database Was Formerly a Logical Standby Database"
This
scenario demonstrates how to configure the SAT logical standby database with
the failed over NYC database. This scenario is a continuation of the example
described in Section 12.2.2. However, in this example, the NYC database was
formerly a physical standby database.
Perform
the following steps to configure the logical standby database with the new
primary database:
On the NYC database, issue the following statements (assuming LOG_ARCHIVE_DEST_4 is configured to archive to the SAT database):

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_4=DEFER;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
Step 2 Verify the logical
standby database is capable of serving as a standby database to the new primary
database.
On the SAT database, issue the following statement:

SQL> EXECUTE DBMS_LOGSTDBY.PREPARE_FOR_NEW_PRIMARY( -
former_standby_type => 'PHYSICAL', -
dblink => 'nyc_link');
Note:
If the ORA-16109 message is returned and the 'LOGSTDBY: prepare_for_new_primary failure -- applied too far, flashback required.' warning is written in the alert.log, perform the following steps:
1. Flash back the database to the SCN as stated in the warning.
2. Repeat this step before continuing.
See Section 12.4.3 for an example of how to flash back a
logical standby database to an Apply SCN.
On the NYC database, issue the following statements (assuming LOG_ARCHIVE_DEST_4 is configured to archive to the SAT database):

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_4=ENABLE;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
Step 4 Query the new primary database to determine the SCN at which real-time apply can be enabled on the logical standby database.

On the NYC database, issue the following query to determine the SCN of interest:

SQL> SELECT MAX(NEXT_CHANGE#)-1 AS WAIT_FOR_SCN FROM V$ARCHIVED_LOG;
On the SAT database, issue the following statement:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;

Note that you must always issue this statement without the real-time apply option. You need to wait for SQL Apply to apply past the WAIT_FOR_SCN value returned in Step 4 before you can enable real-time apply. To determine when it is safe to resume real-time apply on the logical standby database, monitor the V$LOGSTDBY_PROGRESS view:

SQL> SELECT APPLIED_SCN FROM V$LOGSTDBY_PROGRESS;
When the value returned is greater than or equal to the WAIT_FOR_SCN value returned in Step 4, you can stop SQL Apply and restart it with the real-time apply option:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
This
scenario demonstrates how to configure the SAT logical standby database with
the failed over NYC database. This scenario is a continuation of the example
described in Section 12.2.2. However, in this example, the NYC database was
formerly a logical standby database.
Perform
the following steps to configure the logical standby database with the new
primary database:
On the NYC database, ensure the following query returns a value of READY. Otherwise, the LSP1 background process has not completed its work and the configuration of this logical standby database must wait. For example:

SQL> SELECT VALUE FROM DBA_LOGSTDBY_PARAMETERS WHERE
  2> NAME = 'REINSTATEMENT_STATUS';

VALUE
--------------------
READY
Note:
If the VALUE column contains NOT
POSSIBLE it means that no logical standby database may be configured with
the new primary database, and you must reinstate the database.
On the NYC database, issue the following statements (assuming LOG_ARCHIVE_DEST_4 is configured to archive to the SAT database):

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_4=DEFER;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
On the SAT database, issue the following statement:

SQL> EXECUTE DBMS_LOGSTDBY.PREPARE_FOR_NEW_PRIMARY( -
former_standby_type => 'LOGICAL', -
dblink => 'nyc_link');
Note:
If the ORA-16109 message is returned and the 'LOGSTDBY: prepare_for_new_primary failure -- applied too far, flashback required.' warning is written in the alert.log file, perform the following steps:
1. Flash back the database to the SCN as stated in the warning.
2. Repeat this step before continuing.
See Section 12.4.3 for an example of how to flash back a
logical standby database to an Apply SCN.
On the SAT database, look for the output from the DBMS_LOGSTDBY.PREPARE_FOR_NEW_PRIMARY procedure that identifies the log files that must be copied to the local system. If Step 3 identified the failover as a no-data-loss failover, then the displayed log files must be copied from the new primary database and should not be obtained from other logical standby databases or the former primary database. For example, on a Linux system, you would enter the grep command:

% grep 'LOGSTDBY: Terminal log' alert_sat.log
LOGSTDBY: Terminal log: [/oracle/dbs/hq_nyc_13.log]
Note:
If the prior step was executed multiple times,
the output from the most recent attempt is the only relevant output. File paths
are relative to the new primary database and may not be resolvable on the local
file system.
On the SAT database, copy the terminal log files to the local system. The following example shows how to do this using Linux commands:

% cp /net/nyc/oracle/dbs/hq_nyc_13.log /net/sat/oracle/dbs/hq_sat_13.log
On the SAT database, issue the following statement:

SQL> ALTER DATABASE REGISTER OR REPLACE LOGICAL LOGFILE -
'/net/sat/oracle/dbs/hq_sat_13.log';
On the SAT database, issue the following statement:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY NEW PRIMARY nyc_link;
Note that you must always issue this statement without the real-time apply option. If you want to enable real-time apply on the logical standby database, wait for the above statement to complete successfully, and then issue the following statements:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
On the NYC database, issue the following statements (assuming LOG_ARCHIVE_DEST_4 is configured to archive to the SAT database):

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_4=ENABLE;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
After
a failover occurs, the original primary database can no longer participate in
the Data Guard configuration until it is repaired and established as a standby
database in the new configuration. To do this, you can use the Flashback
Database feature to recover the failed primary database to a point in time
before the failover occurred, and then convert it into a physical or logical
standby database in the new configuration. The following sections describe:
- Flashing Back a Failed Primary Database
into a Physical Standby Database
- Flashing Back a Failed Primary Database
into a Logical Standby Database
Note:
You must have already enabled Flashback Database on the original
primary database before the failover. See Oracle Database Backup and Recovery Basics for more information.
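For reference, a minimal sketch of enabling Flashback Database on the primary database before any failover (this assumes a flash recovery area is already configured; the retention target value is illustrative):

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=1440;
SQL> ALTER DATABASE FLASHBACK ON;
SQL> ALTER DATABASE OPEN;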
See Also:
Oracle Data Guard Broker for automatic reinstatement of the failed
primary database as a new standby database (as an alternative to using
Flashback Database)
The
following steps assume the user has already performed a failover involving a
physical standby database and Flashback Database has been enabled on the old
primary database. This procedure brings the old primary database back into the
Data Guard configuration as a new physical standby database.
On the new primary database, issue the following query to determine the SCN at which the old standby database became the new primary database:

SQL> SELECT TO_CHAR(STANDBY_BECAME_PRIMARY_SCN) FROM V$DATABASE;
To create a new physical standby database, shut down the old primary database (if necessary), mount it, and flash it back to the value for STANDBY_BECAME_PRIMARY_SCN that was determined in Step 1:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO SCN standby_became_primary_scn;
Perform the following steps on the old primary database:

1. Issue the following statement on the old primary database:

   SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

   This statement will dismount the database after successfully converting the control file to a standby control file.

2. Shut down and restart the database:

   SQL> SHUTDOWN IMMEDIATE;
   SQL> STARTUP MOUNT;
Before the new standby database was created, the new primary database probably stopped transmitting redo to the remote destination. To restart redo transport services, perform the following steps on the new primary database:

1. Issue the following query to see the current state of the archive destinations:

   SQL> SELECT DEST_ID, DEST_NAME, STATUS, PROTECTION_MODE, DESTINATION, ERROR, SRL
     2> FROM V$ARCHIVE_DEST_STATUS;

2. If necessary, enable the destination:

   SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_n=ENABLE;

3. Perform a log switch to ensure the standby database begins receiving redo data from the new primary database, and verify it was sent successfully. At the SQL prompt, enter the following statements:

   SQL> ALTER SYSTEM SWITCH LOGFILE;
   SQL> SELECT DEST_ID, DEST_NAME, STATUS, PROTECTION_MODE, DESTINATION, ERROR, SRL
     2> FROM V$ARCHIVE_DEST_STATUS;
On the new standby database, you may also need to change the LOG_ARCHIVE_DEST_n initialization parameters so that redo transport services do not transmit redo data to other databases. This step can be skipped if both the primary and standby database roles were set up with the VALID_FOR attribute in one server parameter file (SPFILE). By doing this, the Data Guard configuration operates properly after a role transition.
Start Redo Apply or real-time apply on the new physical standby database:

- To start Redo Apply:

  SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

- To start real-time apply:

  SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
    2> USING CURRENT LOGFILE DISCONNECT;
Once
the failed primary database is restored and is running in the standby role, you
can optionally perform a switchover to transition the databases to their
original (pre-failure) roles. See Section 7.2.1, "Switchovers Involving a
Physical Standby Database" for more information.
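A hedged sketch of that switchover (Section 7.2.1 is the authoritative procedure): issue statements along these lines, first on the current primary database and then on the physical standby database:

-- On the current primary database:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

-- On the target physical standby database:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;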
The
following steps assume that the Data Guard configuration has already completed
a failover involving a logical standby database and Flashback Database has been
enabled on the old primary database. This procedure brings the old primary
database back into the Data Guard configuration as a new logical standby
database, without having to formally reinstantiate it from the new primary
database.
On the new primary database, issue the following query to determine the SCN to which you want to flash back the failed primary database:

SQL> SELECT APPLIED_SCN AS FLASHBACK_SCN FROM V$LOGSTDBY_PROGRESS;
Step 2 Determine the log files
that must be copied to the failed primary database for Flashback Database.
On the new primary database, issue the following query to determine the log files that must be copied to the failed primary database for Flashback Database to reach a consistent state:

SQL> SELECT NAME FROM DBA_LOGSTDBY_LOG
  2> WHERE NEXT_CHANGE# >
  3> (SELECT VALUE FROM DBA_LOGSTDBY_PARAMETERS
  4> WHERE NAME = 'STANDBY_BECAME_PRIMARY_SCN')
  5> AND FIRST_CHANGE# <= (FLASHBACK_SCN from step 1);
To create a new logical standby database, shut down the database (if necessary), mount the failed primary database, flash it back to the FLASHBACK_SCN determined in Step 1, and enable the database guard:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO SCN flashback_scn;
SQL> ALTER DATABASE GUARD ALL;
SQL> ALTER DATABASE OPEN RESETLOGS;
SQL> CREATE PUBLIC DATABASE LINK mylink
  2> CONNECT TO system IDENTIFIED BY password
  3> USING 'service_name_of_new_primary_database';
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY NEW PRIMARY mylink;
The
role reversal is now complete.
Once
the failed primary database has been restored and is running in the standby
role, you can optionally perform a switchover to transition the databases to
their original (pre-failure) roles. See Section 7.3.1, "Switchovers Involving a
Logical Standby Database" for more information.
One of the benefits of a standby database is that Flashback Database can be performed on the standby database without affecting the primary database service. Flashing back a database to a specific point in time is a straightforward task; however, on a logical standby database, you may want to flash back to a time just before a known transaction was committed. Such a need can arise when configuring a logical standby database with a new primary database after a failover.
The
following steps describe how to use Flashback Database and SQL Apply to recover
to a known applied SCN.
On the logical standby database, issue the following query to identify the log files that contain the APPLY_SCN:

SQL> SELECT FILE_NAME FROM DBA_LOGSTDBY_LOG
  2> WHERE FIRST_CHANGE# <= APPLY_SCN
  3> AND NEXT_CHANGE# > APPLY_SCN
  4> ORDER BY FIRST_CHANGE# ASCENDING;

FILE_NAME
----------------------------------------------------------------
/net/sat/oracle/dbs/hq_sat_13.log
Locate the timestamp in the alert.log file associated with the SQL Apply initial reading of the first log file displayed in Step 1. For example:

% grep -B 1 '^LOGMINER: Begin mining logfile' alert_gap2.log | grep -B 1 hq_sat_13.log
Tue Jun 7 02:38:18 2005
LOGMINER: Begin mining logfile: /net/sat/oracle/dbs/hq_sat_13.log
Flash back the database to the timestamp identified in Step 2.

SQL> SHUTDOWN;
SQL> STARTUP MOUNT EXCLUSIVE;
SQL> FLASHBACK DATABASE TO TIMESTAMP -
     TO_TIMESTAMP('07-Jun-05 02:38:18', 'DD-Mon-RR HH24:MI:SS');
SQL> ALTER DATABASE OPEN RESETLOGS;
Issue the following query:

SQL> SELECT APPLIED_SCN FROM V$LOGSTDBY_PROGRESS;
Suppose an error has occurred on the primary database in a Data Guard configuration in which the standby database is using real-time apply. In this situation, the same error will be applied on the standby database.

However, if Flashback Database is enabled, you can revert the primary and standby databases back to their pre-error condition by issuing the FLASHBACK DATABASE and OPEN RESETLOGS statements on the primary database, and then issuing a similar FLASHBACK STANDBY DATABASE statement on the standby database before restarting log apply services. (If Flashback Database is not enabled, you need to re-create the standby database, as described in Chapter 3 and Chapter 4, after the point-in-time recovery was performed on the primary database.)
The
following steps describe how to avoid re-creating a physical standby database
after you issued the OPEN RESETLOGS statement on the primary
database.
On the primary database, use the following query to obtain the value of the system change number (SCN) that is 2 SCNs before the RESETLOGS operation occurred on the primary database:

SQL> SELECT TO_CHAR(RESETLOGS_CHANGE# - 2) FROM V$DATABASE;

On the standby database, obtain the current SCN with the following query:

SQL> SELECT TO_CHAR(CURRENT_SCN) FROM V$DATABASE;
If the value of CURRENT_SCN is larger than the value of resetlogs_change# - 2, issue the following statement to flash back the standby database:

SQL> FLASHBACK STANDBY DATABASE TO SCN resetlogs_change# - 2;

- If the value of CURRENT_SCN is less than the value of resetlogs_change# - 2, skip to Step 4.
- If the standby database's SCN
is far enough behind the primary database's SCN, log apply services will
be able to continue through the OPEN RESETLOGS statement without
stopping. In this case, flashing back the database is unnecessary because
log apply services do not stop upon reaching the OPEN
RESETLOGS statement in the redo data.
To
start Redo Apply on the physical standby database, issue the following
statement:
SQL> ALTER DATABASE
RECOVER MANAGED STANDBY DATABASE DISCONNECT;
The
standby database is now ready to receive and apply redo from the primary
database.
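As an optional, illustrative check (not part of the original procedure), you can confirm that Redo Apply has resumed by querying the V$MANAGED_STANDBY view on the standby database:
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY;
A row for the MRP0 process with a status such as APPLYING_LOG indicates that Redo Apply is running.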
The following steps describe how to avoid re-creating a logical standby database after you have flashed back the primary database and opened it by issuing the OPEN RESETLOGS statement.
On
the primary database, use the following query to obtain the value of the system
change number (SCN) that is 2 SCNs before the RESETLOGS operation
occurred on the primary database:
SQL> SELECT
TO_CHAR(RESETLOGS_CHANGE# - 2) AS FLASHBACK_SCN FROM V$DATABASE;
On the logical standby database, stop SQL Apply:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Determine the SCN through which SQL Apply has applied changes:
SQL> SELECT APPLIED_SCN FROM V$LOGSTDBY_PROGRESS;
If the APPLIED_SCN is less than the value of resetlogs_change# - 2, you do not need to flash back the standby database and can proceed to Step 6. This may happen if SQL Apply is running with a delay. Otherwise, continue with Step 5.
On the logical standby database, determine which archived redo log file contains the FLASHBACK_SCN determined in Step 1:
SQL> SELECT FILE_NAME FROM DBA_LOGSTDBY_LOG
  2> WHERE FIRST_CHANGE# <= FLASHBACK_SCN
  3> AND NEXT_CHANGE# > FLASHBACK_SCN
  4> ORDER BY FIRST_CHANGE# ASCENDING;
FILE_NAME
----------------------------------------------------------------
/net/sat/oracle/dbs/hq_sat_146.log
Locate the timestamp in the alert.log file associated with the SQL Apply initial reading of the first log file displayed in the previous step. For example:
%grep -B 1 '^LOGMINER:
Begin mining logfile' alert.log | grep
-B 1 hq_sat_146.log
Tue Mar 7 12:38:18 2005
LOGMINER: Begin mining
logfile: /net/sat/oracle/dbs/hq_sat_146.log
Issue the following SQL statements to flash back the logical standby database to the time identified in the previous step, and open it with the RESETLOGS option:
SQL> SHUTDOWN;
SQL> STARTUP MOUNT EXCLUSIVE;
SQL> FLASHBACK DATABASE TO TIMESTAMP -
TO_TIMESTAMP('07-Mar-05 12:38:18', 'DD-Mon-RR HH24:MI:SS');
SQL> ALTER DATABASE OPEN RESETLOGS;
On the logical standby database, issue the following query:
SQL> SELECT APPLIED_SCN FROM V$LOGSTDBY_PROGRESS;
Then restart SQL Apply:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
Using
a combination of Data Guard, restore points, and Flashback Database, a physical
standby database can be opened temporarily in read/write mode for development,
reporting, or testing purposes, and then flashed back to a point in the past to
be reverted back to a physical standby database. When the database is flashed
back, Data Guard automatically synchronizes the standby database with the
primary database, without the need to re-create the physical standby database
from a backup copy of the primary database.
Figure 12-7 shows a physical standby database being activated as a
read/write clone database, resynchronized with the primary database, and
eventually flashed back and reverted to its physical standby database role. You can repeat this cycle of activate, flash back, and revert as many times as necessary.
Caution:
While the database is activated, it is not
receiving redo data from the primary database and cannot provide disaster
protection. It is recommended that there be at least two physical standby
databases participating in the configuration so that the primary database
remains protected against data loss.
Perform
the following steps to activate the physical standby database as a production
database and later resynchronize it with the primary database.
- Set up a flash recovery area.
On the physical standby database that will be activated for read/write access, set the following initialization parameters to ensure a guaranteed restore point can be created. This scenario sets up the flash recovery area in the /arch/oradata location:
DB_RECOVERY_FILE_DEST=/arch/oradata
DB_RECOVERY_FILE_DEST_SIZE=20G
- Cancel Redo Apply and create a guaranteed restore point.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> CREATE RESTORE POINT before_application_patch GUARANTEE FLASHBACK DATABASE;
When you create a guaranteed restore point, you associate an easy-to-remember name with a timestamp or SCN so that you can later flash back the database to a name instead of specifying an exact SCN or time.
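As an illustrative verification (assuming the restore point name before_application_patch used later in this scenario), you can confirm the guaranteed restore point exists by querying V$RESTORE_POINT:
SQL> SELECT NAME, SCN, GUARANTEE_FLASHBACK_DATABASE FROM V$RESTORE_POINT;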
- Archive the current log file.
On the primary database, switch logs so the SCN of the restore point (created in the previous step) will be archived on the physical standby database:
SQL> ALTER SYSTEM
ARCHIVE LOG CURRENT;
When using standby redo log files, this step is essential to
ensure the database can be properly flashed back to the restore point.
- Defer log archive destinations
pointing to the standby that will be activated.
On the primary database (on all instances if this is a Real Application Clusters database), defer the archival of redo data to the destination associated with the physical standby database that will be opened. For example:
SQL> ALTER SYSTEM SET
LOG_ARCHIVE_DEST_STATE_2=DEFER;
On
the physical standby database, perform the following steps:
- Activate the physical standby database:
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
- If the physical standby
database has been opened read-only since the instance was started, perform
this step. Otherwise, skip to step 3.
Enter the following statement to shut down and restart the
physical standby database:
SQL> STARTUP MOUNT
FORCE;
- Set the protection mode to maximum performance and open the database for read/write access:
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
SQL> ALTER DATABASE OPEN;
After the standby database is activated, its protection mode is downgraded to maximum performance mode, because there is no standby database configured to protect the database against data loss while it is temporarily activated as a production database. Note that this protection mode setting does not affect the protection mode of the original primary database; it affects only the activated standby database.
When
the activated standby database is converted back to a physical standby
database, its protection mode is automatically changed to match that of the
original primary database.
If
the standby database that was opened read/write temporarily has remote archive
log destinations, you might need to disable them. In this way, the read/write
testing or reporting database will not propagate its temporary changes to other
standby databases in the original Data Guard environment.
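For example, assuming the activated database has a remote destination defined as LOG_ARCHIVE_DEST_3 (a hypothetical destination number), you might defer it as follows:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3=DEFER;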
Once
the standby database has been activated, you can run reporting tools or perform
other testing and activities for days or even weeks, independent of the primary
database.
Caution:
While the database is activated, it is not
receiving redo data from the primary database and cannot provide disaster
protection. It is recommended that there be at least two physical standby
databases participating in the configuration so that the primary database
remains protected against data loss.
Also,
any results stored in the activated database will be lost when you later flash
back the database. Results that should be saved must be copied out of the
activated database before flashing it back.
After
you finish testing, you need to resynchronize the activated database with the
primary database. Issue the following statements on the activated database to
quickly flash it back to the guaranteed restore point and resynchronize it with
the primary database:
SQL> STARTUP MOUNT
FORCE;
SQL> FLASHBACK
DATABASE TO RESTORE POINT before_application_patch;
SQL> ALTER DATABASE
CONVERT TO PHYSICAL STANDBY;
SQL> STARTUP MOUNT
FORCE;
The
method you use will depend on how far the activated standby database lags
behind the primary database in its application of redo data:
- Let archive gap resolution
fetch all missing archived redo log files and allow Redo Apply to apply
the gap.
If the activated database has not fallen too far behind the
original primary database, issue the following statement on the standby
database to resynchronize it with the primary database and restart Redo Apply.
For example:
SQL> ALTER DATABASE
RECOVER MANAGED STANDBY DATABASE DISCONNECT;
Then, go to Step 7.
- Create an incremental backup on
the primary and apply it to the standby.
If the activated database has fallen too far behind the original
primary database (for example, if there are not sufficient log files
available), you can take an incremental backup from the primary database and
apply it to the standby database. See Section 12.7.1 for information about using RMAN
incremental backups to resynchronize the standby database with the primary
database.
Note:
If the standby database lags very far behind the primary database,
it may be quicker to apply an incremental backup created from the primary
database using the steps in Section 12.7.1.
After you apply an incremental backup to the standby database, you typically need to apply more redo to the standby database to activate the physical standby database again for read/write testing or reporting purposes. More specifically, you might need to apply the redo generated by the primary database while the incremental backup was taken. Otherwise, issuing an ALTER DATABASE ACTIVATE STANDBY DATABASE statement will return an error.
On
the primary database, issue the following statement to reenable archiving to
the physical standby database:
SQL> ALTER SYSTEM SET
LOG_ARCHIVE_DEST_STATE_2=ENABLE;
In
some situations, RMAN incremental
backups can be used to synchronize a physical standby database with the primary
database. Using the RMAN BACKUP INCREMENTAL FROM SCN command, you can
create a backup on the primary database that starts at the standby database's
current SCN, which can then be used to roll the standby database forward in
time.
The
following sections describe situations in which RMAN incremental backups may be
useful:
- Physical Standby Database Lags Far Behind
the Primary Database
- Physical Standby Database Has Nologging
Changes On a Subset of Datafiles
- Physical Standby Database Has Widespread
Nologging Changes
See Also:
Oracle Database Backup and Recovery Advanced
User's Guide for more
information about RMAN incremental backups
In
cases where a physical standby database is far behind the primary database, an
RMAN incremental backup can be used to roll the standby database forward faster
than redo log apply. In this procedure, the RMAN BACKUP INCREMENTAL FROM
SCN command is used to create an incremental backup on the primary
database that starts at the current SCN of the standby and is used to roll
forward the standby database.
Note:
The steps in this section can also be used to resolve problems if a physical standby database has lost or corrupted archived redo data or has an unresolvable archive gap.
- Stop Redo Apply on the standby database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
- On the standby database, find the SCN which will be used for the incremental backup at the primary database:
SQL> SELECT CURRENT_SCN FROM V$DATABASE;
- In RMAN, connect to the primary database and create an incremental backup from the SCN derived in the previous step:
RMAN> BACKUP INCREMENTAL FROM SCN <SCN from previous step>
      DATABASE FORMAT '/tmp/ForStandby_%U' TAG 'FORSTANDBY';
Note:
RMAN does not consider the incremental backup as part of a backup strategy at the source database. Hence:
- The backup is not suitable for use in a normal RECOVER DATABASE operation at the source database.
- The backup is not cataloged at the source database.
- The backup sets produced by this command are written to the /dbs location by default, even if the flash recovery area or some other backup destination is defined as the default for disk backups.
- You must create this incremental backup on disk for it to be useful. When you move the incremental backup to the standby database, you must catalog it at the standby as described in Oracle Database Backup and Recovery Advanced User's Guide. Backups on tape cannot be cataloged.
- Transfer all backup sets created on the primary system to the standby system (note that there may be more than one backup file created). For example:
scp /tmp/ForStandby_* standby:/tmp
- Connect to the standby database as the RMAN target, and catalog all incremental backup pieces:
RMAN> CATALOG START WITH '/tmp/ForStandby';
- Recover the standby database with the cataloged incremental backup:
RMAN> RECOVER DATABASE NOREDO;
- In RMAN, connect to the primary database and create a standby control file backup:
RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/tmp/ForStandbyCTRL.bck';
- Copy the standby control file backup to the standby system. For example:
scp /tmp/ForStandbyCTRL.bck standby:/tmp
- Shut down the standby database and start it up in NOMOUNT mode:
RMAN> SHUTDOWN;
RMAN> STARTUP NOMOUNT;
- In RMAN, connect to the standby database and restore the standby control file:
RMAN> RESTORE STANDBY CONTROLFILE FROM '/tmp/ForStandbyCTRL.bck';
- Shut down the standby database and start it up in MOUNT mode:
RMAN> SHUTDOWN;
RMAN> STARTUP MOUNT;
- If the primary and standby database data file directories are identical, skip to the next step. If they are different, then in RMAN, connect to the standby database, catalog the standby data files, and switch the standby database to use the just-cataloged data files. For example:
RMAN> CATALOG START WITH '+DATA_1/CHICAGO/DATAFILE/';
RMAN> SWITCH DATABASE TO COPY;
- If the primary and standby database redo log directories are identical, skip to the next step. Otherwise, on the standby database, use an OS utility or the asmcmd utility (if it is an ASM-managed database) to remove all online and standby redo logs from the standby directories, and ensure that the LOG_FILE_NAME_CONVERT parameter is properly defined to translate log directory paths. For example, LOG_FILE_NAME_CONVERT='/BOSTON/','/CHICAGO/'.
- On the standby database, clear all standby redo log groups (there may be more than 3):
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 1;
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 2;
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 3;
- On the standby database, restart Flashback Database:
SQL> ALTER DATABASE FLASHBACK OFF;
SQL> ALTER DATABASE FLASHBACK ON;
- On the standby database, restart MRP:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
Follow this step-by-step procedure to
roll forward a physical standby database for which nologging changes have been
applied to a small subset of the database:
- List the files that have had nologging changes applied by querying the V$DATAFILE view on the standby database. For example:
SQL> SELECT FILE#, FIRST_NONLOGGED_SCN FROM V$DATAFILE
  2> WHERE FIRST_NONLOGGED_SCN > 0;

     FILE# FIRST_NONLOGGED_SCN
---------- -------------------
         4              225979
         5              230184
- Stop Redo Apply on the standby database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
- On the standby database, take offline the datafiles (recorded in step 1) that have had nologging changes. Taking these datafiles offline ensures redo data is not skipped for the corrupt blocks while the incremental backups are performed.
SQL> ALTER DATABASE DATAFILE 4 OFFLINE FOR DROP;
SQL> ALTER DATABASE DATAFILE 5 OFFLINE FOR DROP;
- Start Redo Apply on the standby database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  2> USING CURRENT LOGFILE DISCONNECT;
- While connected to the primary database as the RMAN target, create an incremental backup for each datafile listed in the FIRST_NONLOGGED_SCN column (recorded in step 1). For example:
RMAN> BACKUP INCREMENTAL FROM SCN 225979 DATAFILE 4 FORMAT '/tmp/ForStandby_%U' TAG 'FOR STANDBY';
RMAN> BACKUP INCREMENTAL FROM SCN 230184 DATAFILE 5 FORMAT '/tmp/ForStandby_%U' TAG 'FOR STANDBY';
- Transfer all backup sets created on the primary system to the standby system. (Note that there may be more than one backup file created.)
scp /tmp/ForStandby_* standby:/tmp
- While connected to the physical standby database as the RMAN target, catalog all incremental backup pieces. For example:
RMAN> CATALOG START WITH '/tmp/ForStandby_';
- Stop Redo Apply on the standby database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
- Bring the datafiles online on the standby database:
SQL> ALTER DATABASE DATAFILE 4 ONLINE;
SQL> ALTER DATABASE DATAFILE 5 ONLINE;
- While connected to the physical standby database as the RMAN target, apply the incremental backup sets:
RMAN> RECOVER DATAFILE 4, 5 NOREDO;
- Query the V$DATAFILE view on the standby database to verify there are no datafiles with nologged changes. The following query should return zero rows:
SQL> SELECT FILE#, FIRST_NONLOGGED_SCN FROM V$DATAFILE
  2> WHERE FIRST_NONLOGGED_SCN > 0;
- Remove the incremental backups from the standby system:
RMAN> DELETE BACKUP TAG 'FOR STANDBY';
- Manually remove the incremental backups from the primary system. For example, using the Linux rm command:
rm /tmp/ForStandby_*
- Start Redo Apply on the standby database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
Follow this step-by-step procedure to roll forward a physical standby database for which nologging changes have been applied to a large portion of the database:
- Query the V$DATAFILE view on the standby database to determine the lowest FIRST_NONLOGGED_SCN:
SQL> SELECT MIN(FIRST_NONLOGGED_SCN) FROM V$DATAFILE
  2> WHERE FIRST_NONLOGGED_SCN > 0;

MIN(FIRST_NONLOGGED_SCN)
------------------------
                  223948
- Stop Redo Apply on the standby database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
- While connected to the primary database as the RMAN target, create an incremental backup from the lowest FIRST_NONLOGGED_SCN (recorded in step 1):
RMAN> BACKUP INCREMENTAL FROM SCN 223948 DATABASE FORMAT '/tmp/ForStandby_%U' TAG 'FOR STANDBY';
- Transfer all backup sets created on the primary system to the standby system. (Note that more than one backup file may have been created.) The following example uses the scp command to copy the files:
scp /tmp/ForStandby_* standby:/tmp
- While connected to the standby database as the RMAN target, catalog all incremental backup pieces:
RMAN> CATALOG START WITH '/tmp/ForStandby_';
- While connected to the standby database as the RMAN target, apply the incremental backups:
RMAN> RECOVER DATABASE NOREDO;
- Query the V$DATAFILE view to verify there are no datafiles with nologged changes. The following query on the standby database should return zero rows:
SQL> SELECT FILE#, FIRST_NONLOGGED_SCN FROM V$DATAFILE
  2> WHERE FIRST_NONLOGGED_SCN > 0;
- Remove the incremental backups from the standby system:
RMAN> DELETE BACKUP TAG 'FOR STANDBY';
- Manually remove the incremental backups from the primary system. For example, using the Linux rm command:
rm /tmp/ForStandby_*
- Start Redo Apply on the standby database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  2> USING CURRENT LOGFILE DISCONNECT;
By
default, when log apply services are running on the
standby database, the redo data is either written to archived log files and
applied, or when real-time apply is enabled, the redo is written to the standby
database as it arrives from the primary database. But in some cases, you may
want to create a time lag
between the archiving of an online redo log file at the primary site and the
application of the archived redo log file at the standby site. A time lag can
protect against the transfer of corrupted or erroneous data from the primary
site to the standby site. When you set a time delay, it does not delay the
transport of the redo data to the standby database. Instead, the time lag you
specify begins when the redo data is completely archived at the standby
destination.
For example, suppose you run a batch job every night on the primary database. Unfortunately, you accidentally ran the batch job twice, and you did not realize the mistake until the batch job completed for the second time. Ideally, you need to roll back the database to the point in time before the batch job began. A primary database that has a standby database with a time lag could help you to recover. You could fail over the standby database with the time lag and use it as the new primary database.
To
create a standby database with a time lag, use the DELAY attribute of
the LOG_ARCHIVE_DEST_n initialization parameter in the primary
database initialization parameter file.
Note:
If you define a delay for a destination that has real-time apply enabled, the delay is ignored.
Although
the redo data is still automatically transmitted from the primary database to
the standby database and written to archived redo log files (and standby redo
log files, if implemented), the log files are not immediately applied to the
standby database. The log files are applied when the specified time interval expires.
This
scenario uses a 4-hour time lag and covers the following topics:
- Establishing a Time Lag on a Physical
Standby Database
- Failing Over to a Physical Standby
Database with a Time Lag
- Switching Over to a Physical Standby
Database with a Time Lag
Readers of this scenario are assumed to be familiar with the procedures for creating a typical standby database; details of those procedures are omitted from the steps outlined in this scenario. See Chapter 3 for details about creating physical standby databases.
To create a physical
standby database with a time lag, modify the LOG_ARCHIVE_DEST_n initialization
parameter on the primary database to set a delay for the standby database. The
following is an example of how to add a 4-hour delay to
the LOG_ARCHIVE_DEST_n initialization parameter:
SQL> ALTER SYSTEM SET
LOG_ARCHIVE_DEST_2='SERVICE=stdby DELAY=240';
The DELAY attribute
indicates that the archived redo log files at the standby site are not
available for recovery until the 4-hour time interval has expired. The time
interval (expressed in minutes) starts when the archived redo log files are
successfully archived at the standby site. The redo information is still sent
to the standby database and written to the disk as normal.
See Section 6.2.2 for more information about establishing a time lag on physical and logical standby databases.
A
standby database configured to delay application of archived redo log files can
be used to recover from user errors or data corruptions on the primary
database. In most cases, you can query the time-delayed standby database to
retrieve the data needed to repair the primary database (for example, to
recover the contents of a mistakenly dropped table). In cases where the damage
to the primary database is unknown or when the time required to repair the
primary database is prohibitive, you can also consider failing over to a
time-delayed standby database.
Assume
that a backup file was inadvertently applied twice to the primary database and
that the time required to repair the primary database is prohibitive. You
choose to fail over to a physical standby database for which the application of
archived redo log files is delayed. By doing so, you transition the standby
database to the primary role at a point before the problem occurred, but you
will likely incur some data loss. The following steps illustrate the process:
- Initiate the failover by issuing the appropriate SQL statements on the time-delayed physical standby database:
SQL> ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP
The ACTIVATE statement immediately transitions the
standby database to the primary role and makes no attempt to apply any
additional redo data that might exist at the standby location. When using this
statement, you must carefully balance the cost of data loss at the standby
location against the potentially extended period of downtime required to fully
repair the primary database.
- Re-create all other standby
databases in the configuration from a copy of this new primary database.
All of the redo data is transmitted to the standby site as it becomes available. Therefore, even when a time delay is specified for a standby database, you can make the standby database current by overriding the delay with the NODELAY keyword of the SQL ALTER DATABASE RECOVER MANAGED STANDBY DATABASE statement.
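For example, the following statement (a sketch using the NODELAY keyword, which is also shown in the next section) starts Redo Apply and bypasses the delay:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY DISCONNECT FROM SESSION;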
Note:
To recover from a logical error, you must
perform a failover instead of a switchover.
The
following steps demonstrate how to perform a switchover to a time-delayed
physical standby database that bypasses a time lag. For the purposes of this
example, assume that the primary database is located in New York, and the
standby database is located in Boston.
Step 1 Apply all of the archived redo log files to the original
(time-delayed) standby database bypassing the lag.
Switchover
will not begin until the standby database applies all of the archived redo log
files. By lifting the delay using the NODELAY keyword, you allow the
standby database to proceed without waiting for the specified time interval to
pass before applying the archived redo log files.
Issue
the following SQL statement to lift the delay:
SQL> ALTER DATABASE
RECOVER MANAGED STANDBY DATABASE NODELAY
2> DISCONNECT FROM SESSION THROUGH LAST
SWITCHOVER;
You must have exclusive database access before beginning a switchover. Ask users to log off the primary and standby databases, or query the V$SESSION view to identify users that are connected to the databases and close all open sessions except the SQL*Plus session from which you are going to execute the switchover statement. See Oracle Database Administrator's Guide for more information about managing users.
On the primary database (in New York), execute the following statement:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY
  2> WITH SESSION SHUTDOWN;
This
statement does the following:
- Closes the primary database,
terminating any active sessions
- Transmits any unarchived redo
log files and applies them to the standby database (in Boston)
- Adds an end-of-redo marker to
the header of the last log file being archived
- Creates a backup of the current
control file
- Converts the current control
file into a standby control file
Execute
the following statement on the former primary database (in New York):
SQL> SHUTDOWN NORMAL;
SQL> STARTUP MOUNT;
On the original standby database (in Boston), issue the following SQL statement to complete the role transition:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
Then shut down and restart the new primary database by issuing the following SQL statements:
SQL> SHUTDOWN;
SQL> STARTUP;
The V$ARCHIVE_DEST view contains the network error and identifies which standby database cannot be reached. On the primary database, execute the following SQL statement for the destination that experienced the network failure. For example:
SQL> SELECT DEST_ID, STATUS, ERROR FROM V$ARCHIVE_DEST WHERE DEST_ID = 2;

   DEST_ID STATUS    ERROR
---------- --------- --------------------------------------------------------
         2 ERROR     ORA-12224: TNS:no listener
The query results show there are errors archiving to the standby database, and the cause of the error is TNS:no listener. Check whether the listener on the standby site is started; if it is stopped, then start it.
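For example, assuming the default listener name, you might start the listener on the standby host with the Listener Control utility:
% lsnrctl start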
If you cannot solve the network problem quickly, and if the standby
database is specified as a mandatory destination, try to prevent the database
from stalling by doing one of the following:
- Defer archiving to the mandatory destination:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;
When the network problem is resolved, you can enable the archive destination again:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;
- Change the archive destination from mandatory to optional:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1
  2> OPTIONAL REOPEN=60';
When the network problem is resolved, you can change the archive destination from optional back to mandatory:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1
  2> MANDATORY REOPEN=60';
When
the network is back up again, log apply services can detect and resolve the
archive gaps automatically when the physical standby database resumes Redo
Apply.
In
some SQL statements, the user has the option of
specifying the NOLOGGING clause, which indicates that the database
operation is not logged in the online redo log file. Even though the user
specifies the clause, a redo record is still written to the online redo log
file. However, there is no data associated with this record. This can result in
log application or data access errors at the standby site and manual recovery
might be required to resume applying log files.
Note:
To avoid these problems, Oracle recommends that you always specify the FORCE LOGGING clause in the CREATE DATABASE or ALTER DATABASE statements. See the Oracle Database Administrator's Guide.
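For example, forced logging can be enabled on an existing primary database as follows:
SQL> ALTER DATABASE FORCE LOGGING;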
For logical standby databases, when SQL Apply encounters a redo record for an operation performed with the NOLOGGING clause, it skips over the record and continues applying changes from later records. Later, if an attempt is made to access one of the records that was updated with NOLOGGING in effect, the following error is returned:
ORA-01403: no data found
To
recover after the NOLOGGING clause is specified, re-create one or
more tables from the primary database, as described in Section 9.4.6.
Note:
In general, use of the NOLOGGING clause is not recommended. Optionally, if you know in advance that operations using the NOLOGGING clause will be performed on certain tables in the primary database, you might want to prevent the application of SQL statements associated with these tables to the logical standby database by using the DBMS_LOGSTDBY.SKIP procedure.
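For example, the following call (a sketch using a hypothetical HR.NOLOG_TAB table, issued on the logical standby database while SQL Apply is stopped) directs SQL Apply to skip DML changes for that table:
SQL> EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'DML', schema_name => 'HR', object_name => 'NOLOG_TAB');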
When
the archived redo log file is copied to the standby site and applied to the
physical standby database, a portion of the datafile is unusable and is marked
as being unrecoverable. When you either fail over to the physical standby
database, or open the standby database for read-only access, and attempt to
read the range of blocks that are marked as UNRECOVERABLE,
you will see error messages similar to the following:
ORA-01578: ORACLE data
block corrupted (file # 1, block # 2521)
ORA-01110: data file 1:
'/oracle/dbs/stdby/tbs_1.dbf'
ORA-26040: Data block
was loaded using the NOLOGGING option
To recover after the NOLOGGING clause is specified, you need to copy the datafile that contains the missing redo data from the primary site to the physical standby site. Perform the following steps:
- Query the primary database:
SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;

NAME                                                  UNRECOVERABLE
----------------------------------------------------- -------------
/oracle/dbs/tbs_1.dbf                                          5216
/oracle/dbs/tbs_2.dbf                                             0
/oracle/dbs/tbs_3.dbf                                             0
/oracle/dbs/tbs_4.dbf                                             0
4 rows selected.
- Query the standby database:
SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;

NAME                                                  UNRECOVERABLE
----------------------------------------------------- -------------
/oracle/dbs/stdby/tbs_1.dbf                                    5186
/oracle/dbs/stdby/tbs_2.dbf                                       0
/oracle/dbs/stdby/tbs_3.dbf                                       0
/oracle/dbs/stdby/tbs_4.dbf                                       0
4 rows selected.
- Compare the query results of
the primary and standby databases.
Compare the value of the UNRECOVERABLE_CHANGE# column in
both query results. If the value of the UNRECOVERABLE_CHANGE# column
in the primary database is greater than the same column in the standby
database, then the datafile needs to be copied from the primary site to the
standby site.
In this example, the value of the UNRECOVERABLE_CHANGE# in the primary database for the tbs_1.dbf datafile is greater, so you need to copy the tbs_1.dbf datafile to the standby site.
On the primary database, issue the following SQL statements to back up the datafile:
SQL> ALTER TABLESPACE system BEGIN BACKUP;
SQL> EXIT;
% cp tbs_1.dbf /backup
SQL> ALTER TABLESPACE system END BACKUP;
Copy the datafile that contains the missing redo data from the primary site to the location on the physical standby site where files related to recovery are stored.
On the standby database, restart Redo Apply by issuing the following SQL statement:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
You
might get the following error messages (possibly in the alert log) when you try
to restart Redo Apply:
ORA-00308: cannot open
archived log 'standby1'
ORA-27037: unable to
obtain file status
SVR4 Error: 2: No such
file or directory
Additional information:
3
ORA-01547: warning:
RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was
not restored from a sufficiently old backup
ORA-01110: data file 1:
'/oracle/dbs/stdby/tbs_1.dbf'
If
you get the ORA-00308 error and Redo Apply does not terminate
automatically, you can cancel recovery by issuing the following statement from
another terminal window:
SQL> ALTER DATABASE
RECOVER MANAGED STANDBY DATABASE CANCEL;
These
error messages are returned when one or more log files in the archive gap have
not been successfully applied. If you receive these errors, manually resolve
the gaps, and repeat Step 4. See Section 5.8.4 for information about manually resolving
an archive gap.
If you performed unrecoverable operations on your primary
database, determine if a new backup operation is required by following these
steps:
- Query
the V$DATAFILE view on the primary database to determine
the system change number (SCN) or the time at which the
Oracle database generated the most recent invalidated redo data.
- Issue the following SQL statement on the primary database to determine if you need to perform another backup:
SELECT UNRECOVERABLE_CHANGE#,
       TO_CHAR(UNRECOVERABLE_TIME, 'mm-dd-yyyy hh:mi:ss')
FROM   V$DATAFILE;
- If the query in the previous
step reports an unrecoverable time for a datafile that is more recent than
the time when the datafile was last backed up, then make another backup of
the datafile in question.
See Oracle Database Reference for more information about
the V$DATAFILE view.
An archive gap is a range of
archived redo log files created whenever you are unable to apply the next
archived redo log file generated by the primary database to the standby
database. This section contains the following topics:
- What Causes Archive Gaps?
- Determining If an Archive Gap Exists
- Manually Transmitting Log Files in the
Archive Gap to the Standby Site
- Manually Applying Log Files in the
Archive Gap to the Standby Database
Note:
Typically, archive gaps are resolved automatically without the
need for manual intervention. See Section 5.8 for more information about how log apply services
automatically recover from gaps in the archived redo log files.
An archive gap can occur whenever the primary database archives the current online redo log file locally, but the redo data is not archived at the standby site. Because the standby database requires the sequential application of log files, media recovery stops at the first missing log file encountered. The following sections describe typical situations in which archive gaps occur:
- Creation of the Standby Database
- Shutdown of the Standby Database When the Primary Database Is Open
- Network Failure Prevents Transmission of Redo
One
example of an archive gap occurs when you create the standby database from an
old backup. For example, if the standby database is made from a backup that
contains changes through log file 100, and the primary database currently
contains changes through log file 150, then the standby database requires that
you apply log files 101 to 150. Another typical example of an archive gap
occurs when you generate the standby database from a hot backup of an open
database.
For
example, assume the scenario illustrated in Figure 12-8.
The
following steps occur:
- You take a hot backup of the primary database.
- At time t, while you are busy configuring the network files, the primary database archives log files with sequences 4 and 5.
- At time t + 1, you start the standby instance.
- The primary database archives redo log files with sequences 6, 7, and 8 on the primary site, and transmits the redo to the standby site.
Archived
redo log file sequences 4 and 5 are now part of an archive gap, and these log
files must be applied to the standby database.
You might be required to shut down the standby database to resolve maintenance issues. For example, you must shut down the standby database when you change a control file parameter, such as MAXDATAFILES, in the primary database.
To avoid creating archive gaps, follow these rules:
- Start the standby databases and listeners before starting the primary database.
- Shut down the primary database before shutting down the standby database.
If
you violate either of these two rules, then the standby database is down while
the primary database is open and archiving. Consequently, the Oracle database
can create an archive gap.
Note:
If the standby site is specified
as MANDATORY in one of the LOG_ARCHIVE_DEST_n parameters
of the primary initialization parameter file, dynamically change it to OPTIONAL before
shutting down the standby database. Otherwise, the primary database eventually
stalls because it cannot archive its online redo log files.
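For example, assuming the standby destination is LOG_ARCHIVE_DEST_2 with service standby1 (names consistent with the examples later in this section), you might change it to OPTIONAL dynamically:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=standby1 OPTIONAL';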
If
you maintain a Data Guard environment, and the network goes down, the primary
database might continue to write to disk but be unable to send redo to the
standby site. In this situation, archived redo log files accumulate as usual on
the primary site, but the standby database is unaware of them.
See:
- Section 5.7.2 for a detailed account of the significance of
the OPTIONAL and MANDATORY attributes for standby
archival
- Section 12.9 for a related scenario
To determine if there is an archive gap, query
the V$ARCHIVED_LOG and V$LOG views. If an archive gap
exists, the output of the query specifies the thread number and log sequence
number of all log files in the archive gap. If there is no archive
gap for a given thread, the query returns no rows.
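On a physical standby database, you can also query the V$ARCHIVE_GAP view directly; this is an alternative illustration, not the query used in the example that follows:
SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;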
Query the V$ARCHIVED_LOG and V$LOG views on the standby database. For example, the following query shows there is a difference in the RECD and SENT sequence numbers for the destination specified by DEST_ID=2, indicating that there is a gap:
SQL> SELECT MAX(R.SEQUENCE#) LAST_SEQ_RECD, MAX(L.SEQUENCE#) LAST_SEQ_SENT FROM
  2> V$ARCHIVED_LOG R, V$LOG L WHERE
  3> R.DEST_ID=2 AND L.ARCHIVED='YES';

LAST_SEQ_RECD LAST_SEQ_SENT
------------- -------------
            7            10
Use
the following query to determine the names of the archived redo log files on
the local system that must be copied to the standby system that has the gap:
SQL> SELECT NAME FROM
V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1 AND
2> SEQUENCE# BETWEEN 7 AND 10;
NAME
--------------------------------------------------------------------------------
/primary/thread1_dest/arcr_1_7.arc
/primary/thread1_dest/arcr_1_8.arc
/primary/thread1_dest/arcr_1_9.arc
/primary/thread1_dest/arcr_1_10.arc
After you have obtained
the sequence numbers of the log files in the archive gap, you can obtain their
filenames by querying the V$ARCHIVED_LOG view on the primary site.
The archived redo log path names on the standby site are generated by the STANDBY_ARCHIVE_DEST and LOG_ARCHIVE_FORMAT parameters
in the standby initialization parameter file.
If
the standby database is on the same site as the primary database, or the
standby database is on a remote site with a different directory structure than
the primary database, the path names for the log files on the standby site
cannot be the same as the path names of the log files archived by the primary
database. Before transmitting the redo data to the standby site, determine the
correct path names for the archived redo log files at the standby site.
- Review the list of archive gap log files that you obtained earlier. For example, assume you have the following archive gap:
THREAD#    LOW_SEQUENCE#  HIGH_SEQUENCE#
---------- -------------  --------------
         1           460             463
         2           202             204
         3           100             100
If a thread appears in the view, then it contains an archive gap. You need to copy log files from threads 1, 2, and 3.
- Determine the path names of the log files in the archive gap that were transmitted by the primary database. After connecting to the primary database, issue a SQL query to obtain the name of a log file in each thread. For example, use the following SQL statement to obtain filenames of log files for thread 1:
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1
  2> AND SEQUENCE# > 459 AND SEQUENCE# < 464;

NAME
---------------------------------------------------------------------
/primary/thread1_dest/arcr_1_460.arc
/primary/thread1_dest/arcr_1_461.arc
/primary/thread1_dest/arcr_1_462.arc
/primary/thread1_dest/arcr_1_463.arc
4 rows selected

Perform similar queries for threads 2 and 3.
- On the standby site, review the settings for STANDBY_ARCHIVE_DEST and LOG_ARCHIVE_FORMAT in the standby initialization parameter file. For example, you discover the following:
STANDBY_ARCHIVE_DEST = /standby/arc_dest/
LOG_ARCHIVE_FORMAT = log_%t_%s_%r.arc
These parameter settings determine the filenames of the archived redo log files at the standby site.
- On the primary site, copy the log files in the archive gap from the primary site to the standby site, renaming them according to the values for STANDBY_ARCHIVE_DEST and LOG_ARCHIVE_FORMAT. For example, enter the following copy commands to copy the archive gap log files required by thread 1:
% cp /primary/thread1_dest/arcr_1_460.arc /standby/arc_dest/log_1_460.arc
% cp /primary/thread1_dest/arcr_1_461.arc /standby/arc_dest/log_1_461.arc
% cp /primary/thread1_dest/arcr_1_462.arc /standby/arc_dest/log_1_462.arc
% cp /primary/thread1_dest/arcr_1_463.arc /standby/arc_dest/log_1_463.arc
Perform similar commands to copy archive gap log files for threads 2 and 3.
- On the standby site, if
the LOG_ARCHIVE_DEST and STANDBY_ARCHIVE_DEST parameter
values are not the same, then copy the archive gap log
files from the STANDBY_ARCHIVE_DEST directory to
the LOG_ARCHIVE_DEST directory. If these parameter values are the
same, then you do not need to perform this step.
For example, assume the following standby initialization parameter
settings:
STANDBY_ARCHIVE_DEST =
/standby/arc_dest/
LOG_ARCHIVE_DEST =
/log_dest/
Because the parameter values are different, copy the archived redo
log files to the LOG_ARCHIVE_DEST location:
% cp /standby/arc_dest/*
/log_dest/
When you initiate manual recovery, the Oracle database looks at
the LOG_ARCHIVE_DEST value to determine the location of the log
files.
Now
that all required log files are in
the STANDBY_ARCHIVE_DEST directory, you can proceed to Section 12.11.4 to apply the archive gap log files to the
standby database. See also Section 8.5.4.4 and the V$ARCHIVED_LOG view
in Chapter 16.
After you have copied the log files in the
archive gap to the standby site, you can apply them
using the RECOVER AUTOMATIC statement.
- Start up and mount the standby database (if it is not already mounted). For example, enter:
SQL> STARTUP MOUNT PFILE=/oracle/admin/pfile/initSTBY.ora
- Recover the database using the AUTOMATIC option:
SQL> ALTER DATABASE RECOVER AUTOMATIC STANDBY DATABASE;
The AUTOMATIC option automatically generates the name of
the next archived redo log file needed to continue the recovery operation.
After recovering the available log files, the Oracle database
prompts for the name of a log file that does not exist. For example, you might
see:
ORA-00308: cannot open
archived log '/oracle/standby/standby_logs/arcr_1_540.arc'
ORA-27037: unable to
obtain file status
SVR4 Error: 2: No such
file or directory
Additional information:
3
Specify log:
{<RET>=suggested | filename | AUTO | CANCEL}
- Cancel recovery after the Oracle database applies the available log files by typing CTRL/C:
SQL> <CTRL/C>
Media recovery cancelled.
The following error messages are acceptable after recovery
cancellation and do not indicate a problem:
ORA-01547: warning:
RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs
more recovery to be consistent
ORA-01110: data file 1:
'some_filename'
ORA-01112: media
recovery not started
- After you finish manually
applying the missing log file, you can restart log apply services on the
standby database, as follows:
SQL> ALTER DATABASE
RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
Chapter 3 and Chapter 4 described how to create physical and logical standby
databases. This section augments the discussions in those chapters with
additional steps that must be performed if the primary database uses Oracle
Managed Files (OMF) or Automatic Storage Management (ASM).
Note:
The discussion in this
section is presented at a level of detail that assumes the reader already knows
how to create a physical standby database and is an experienced user of the
RMAN, OMF, and ASM features. For more information, see:
- Chapter 3, Chapter 4, and Appendix F for information about creating physical and logical standby databases
- Oracle Database Administrator's Guide for information about OMF and ASM
- Oracle Database Backup and Recovery Advanced User's Guide and Oracle Database Backup and Recovery Reference for information about RMAN
Perform
the following tasks to prepare for standby database creation:
- Enable forced logging on the
primary database.
- Enable archiving on the primary
database.
- Set all necessary
initialization parameters on the primary database.
- Create an initialization parameter
file for the standby database.
- If the primary database is
configured to use OMF, then Oracle recommends that the standby database be
configured to use OMF, too. To do this, set
the DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n initialization
parameters to appropriate values. Maintenance and future role transitions
are simplified if the same disk group names are used for both the primary
and standby databases.
- Set
the STANDBY_FILE_MANAGEMENT initialization parameter
to AUTO.
- Configure Oracle Net, as
required, to allow connections to the standby database.
- Create a remote login password
file for the standby database. Use the same password for the SYS account
as on the primary database.
- Start the standby database
instance without mounting the control file.
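As an illustration of the parameter settings described in the preceding list, the following is a minimal sketch that assumes a hypothetical ASM disk group named +DATA:
DB_CREATE_FILE_DEST=+DATA
DB_CREATE_ONLINE_LOG_DEST_1=+DATA
STANDBY_FILE_MANAGEMENT=AUTO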
Perform
the following tasks to create the standby database:
- If the standby database is
going to use ASM, create an ASM instance if one does not already exist on
the standby database system.
- Use the
RMAN BACKUP command to create a backup set that contains a copy
of the primary database's datafiles, archived log files, and a standby
control file.
- Use the RMAN DUPLICATE …
FOR STANDBY command to copy the datafiles, archived redo log files
and standby control file in the backup set to the standby database's
storage area.
The DUPLICATE … FOR STANDBY command performs the actual
data movement at the standby instance. If the backup set is on tape, the media
manager must be configured so that the standby instance can read the backup
set. If the backup set is on disk, the backup pieces must be readable by the standby instance, either by making their primary path names available through NFS, or by copying them to the standby system and using the RMAN CATALOG BACKUPPIECE command to catalog the backup pieces before restoring them.
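A minimal sketch of such an RMAN session follows, assuming Oracle Net aliases chicago (primary) and boston (standby) consistent with this chapter's examples; adapt the connect strings to your environment:
% rman
RMAN> CONNECT TARGET sys/password@chicago
RMAN> CONNECT AUXILIARY sys/password@boston
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY;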
After
you successfully complete these steps, continue with the steps in Section 3.2.7, to verify the configuration of the physical
standby database.
To
create a logical standby database, continue with the standby database creation
process described in Chapter 4, but with the following modifications:
- For a logical standby database,
setting the DB_CREATE_FILE_DEST parameter does not force the
creation of OMF filenames. However, if this parameter was set on the
primary database, it must also be set on the standby database.
- After creating a logical
standby control file on the primary system, do not use an operating system
command to copy this file to the standby system. Instead, use the
RMAN RESTORE CONTROLFILE command to restore a copy of the
logical standby control file to the standby system.
- If the primary database uses OMF files, use RMAN to update the standby database control file to use the new OMF files created on the standby database. To perform this operation, connect only to the standby database, as shown in the following example:
% RMAN TARGET sys/oracle@lstdby
RMAN> CATALOG START WITH '+stby_diskgroup';
RMAN> SWITCH DATABASE TO COPY;