Transportable Tablespace on Physical Standby Database

SITUATION:

We need to refresh one schema, about 300 GB of data, from the PROD database to the TEST database for testing purposes. The production database runs 24×7, but it has a physical standby database, so we use the physical standby for the transportable tablespace (TTS) activity without touching or impacting the 24×7 production environment. The physical standby database and the test database sit on the same grid infrastructure, and the schema uses only one tablespace.

A normal Data Pump export and import would take up to 10 hours. Using transportable tablespace on the physical standby instead takes only about half an hour, with no impact on the primary database.

ENVIRONMENT:

Operating System : RedHat Linux 6
Oracle Grid Infrastructure : 4 nodes 12.1.0.1 Clusterware
Oracle RAC PROD database : 11.2.0.4
Oracle RAC TEST database : 11.2.0.4
Schema & Tablespace Name : TRANSPORT

STEPS:

  • Create the target schema in advance if you want to keep the original name; otherwise, map it to another user with the "REMAP_SCHEMA" parameter when importing.
  • Export the schema metadata, excluding tables and indexes, on the source database

$expdp system/password  directory=EXP_DIR dumpfile=transport_meta_no_table_index.dmp content=metadata_only schemas=transport  exclude=table exclude=index

  • Check and make sure the standby is in sync with the primary

a: Query the data dictionary view V$ARCHIVE_GAP on the standby database:

SQL> select THREAD#,LOW_SEQUENCE#,HIGH_SEQUENCE# from V$ARCHIVE_GAP;

Note: this view has a number of bugs and performance issues on some versions, such as 11.2.0.4.

b: If Data Guard Broker is set up for this database, it is easy to check:

DGMGRL> show database verbose 'STNBYDB';

Database – STNBYDB

Enterprise Manager Name: STNBYDB
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Apply Rate: 9.41 MByte/s
Real Time Query: OFF
Instance(s):
STNBYDB1
STNBYDB2
STNBYDB3
STNBYDB4 (apply instance)
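Before proceeding, the broker output above can be screened programmatically. A minimal Python sketch (the parsing pattern and sample text are illustrative; this is not a DGMGRL API):

```python
import re

def standby_in_sync(dgmgrl_output: str) -> bool:
    """Return True when both Transport Lag and Apply Lag read '0 seconds'.

    Parses the plain text of 'DGMGRL> show database verbose <db>' output.
    """
    lags = dict(re.findall(r"(Transport|Apply) Lag:\s*(\d+)\s*seconds",
                           dgmgrl_output))
    return lags.get("Transport") == "0" and lags.get("Apply") == "0"

# Sample taken from the broker output shown above
sample = """\
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
"""
print(standby_in_sync(sample))  # True
```

A non-zero lag in either field means the standby is not yet caught up and the TTS steps should wait.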

  • Check and set the value of db_recovery_file_dest_size on standby

SQL> show parameter recovery

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +FRA
db_recovery_file_dest_size           big integer 4000G

  • Cancel the MRP (Managed Recovery Process)

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

Database altered.

  • Create a guaranteed restore point (GRP) on the standby

SQL> CREATE RESTORE POINT refresh_transport GUARANTEE FLASHBACK DATABASE;

Restore point created.

  • On the primary, archive the current log and defer log_archive_dest_2, which ships redo to the standby

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;

System altered.

  • Check the protection mode; if it is MAXIMUM AVAILABILITY or MAXIMUM PROTECTION, downgrade it to MAXIMUM PERFORMANCE:

SQL> select PROTECTION_MODE,PROTECTION_LEVEL from v$database;

PROTECTION_MODE      PROTECTION_LEVEL
-------------------- --------------------
MAX_PROTECTION       MAX_PROTECTION

SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;

Database altered.

  • Activate and open the standby database

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
MOUNTED

SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;

Database altered.

SQL> ALTER DATABASE OPEN;

Database altered.

  • Check that the tablespace will be self-contained

SQL> execute sys.dbms_tts.transport_set_check('TRANSPORT', true);
SQL> select * from sys.transport_set_violations;

==> Any violations must be resolved before the tablespace can be transported

  • Place the tablespace in read only mode

SQL> alter tablespace TRANSPORT read only;

Tablespace altered.

  • Do the metadata export, and copy the datafiles

$ expdp system/password dumpfile=transport_refresh.dmp directory=EXP_DIR transport_tablespaces = TRANSPORT transport_full_check=y

Note: the parameters differ across versions; for example, for 10g: transport_tablespace=y tablespaces=TRANSPORT

RMAN> copy datafile 20 to '+DG2'; ( where DG2 is the diskgroup for the TEST database )

  • Check the endianness of the target database and convert, if necessary ( NOT applicable to this situation )

SQL>select * from v$transportable_platform order by platform_id;

If the source and target endianness are the same, file conversion is NOT needed (please refer to the Oracle documentation for other requirements).

If the source and target endianness differ, the files must be converted with RMAN. On the source server:

RMAN> CONVERT TABLESPACE transport TO PLATFORM '<target platform name>' ...

Or on the target server:

RMAN> CONVERT DATAFILE ... FROM PLATFORM '<source platform name>' ...
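The conversion decision above boils down to a lookup of each platform's endian format. A small Python sketch; the mapping below is a hand-copied, illustrative subset of V$TRANSPORTABLE_PLATFORM, so verify the names and endianness against the view in your own databases:

```python
# Illustrative subset of V$TRANSPORTABLE_PLATFORM (PLATFORM_NAME -> ENDIAN_FORMAT).
# Always confirm against the actual view before relying on it.
ENDIAN_FORMAT = {
    "Linux x86 64-bit": "Little",
    "Microsoft Windows x86 64-bit": "Little",
    "Solaris[tm] OE (64-bit)": "Big",
    "AIX-Based Systems (64-bit)": "Big",
    "HP-UX (64-bit)": "Big",
}

def conversion_needed(source_platform: str, target_platform: str) -> bool:
    """RMAN CONVERT is required only when the endian formats differ."""
    return ENDIAN_FORMAT[source_platform] != ENDIAN_FORMAT[target_platform]

print(conversion_needed("Linux x86 64-bit", "Linux x86 64-bit"))          # False
print(conversion_needed("Linux x86 64-bit", "AIX-Based Systems (64-bit)"))  # True
```

In this article's situation both databases are Linux x86 64-bit, so no conversion is needed.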

  • Import TTS metadata

$ impdp system/password DUMPFILE=transport_refresh.dmp DIRECTORY=EXP_DIR TRANSPORT_DATAFILES='+dg3/TEST/DATAFILE/TRANSPORT.588.860329689'

The import fails if the tablespace already exists in the target or if the target schema has not been created. If the user does not exist, Data Pump offers an alternative via REMAP_SCHEMA (with the legacy imp utility you must create the schema first), i.e.:

REMAP_SCHEMA=<source_user>:<target_user>

If the tablespace already exists in the target, use the REMAP_TABLESPACE parameter of impdp (legacy imp has no such option; you would have to rename the tablespace at the source, or rename the existing one at the target).

REMAP_TABLESPACE=(<source_tbs1>:<target_tbs1>,<source_tbs2>:<target_tbs2>,…)
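The remap rules above amount to building the impdp parameter list conditionally. A hedged Python sketch; the helper name and the example user TRANSPORT_TEST are hypothetical:

```python
def build_impdp_args(dumpfile, directory, datafiles,
                     remap_schema=None, remap_tablespace=None):
    """Assemble impdp parameters for a TTS metadata import.

    remap_schema / remap_tablespace are optional (source, target) pairs,
    matching the note above: remap only when the target user or
    tablespace name must differ from the source.
    """
    args = [
        f"DUMPFILE={dumpfile}",
        f"DIRECTORY={directory}",
        "TRANSPORT_DATAFILES=" + ",".join(f"'{f}'" for f in datafiles),
    ]
    if remap_schema:
        args.append("REMAP_SCHEMA=%s:%s" % remap_schema)
    if remap_tablespace:
        args.append("REMAP_TABLESPACE=%s:%s" % remap_tablespace)
    return args

# Hypothetical names, for illustration only:
args = build_impdp_args("transport_refresh.dmp", "EXP_DIR",
                        ["+DG2/TEST/DATAFILE/transport.dbf"],
                        remap_schema=("TRANSPORT", "TRANSPORT_TEST"))
print("impdp system/password " + " ".join(args))
```

Leaving both remap arguments out reproduces the plain TTS import shown above.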

  • Place the standby database in mount mode again

SQL> STARTUP MOUNT;

  • Flash back to the GRP, convert the database back to a physical standby, then restart it in mount mode

SQL> FLASHBACK DATABASE TO RESTORE POINT refresh_transport;
Flashback completed.

SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

Database altered.

  • Start up the MRP again

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE  DISCONNECT USING CURRENT LOGFILE;

Database altered.

  • Re-enable log_archive_dest_2 on the primary, restore the original protection mode if it was downgraded, and check the standby SYNC status as in the earlier step.

SQL>ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

System altered.

  • Drop the guaranteed restore point (GRP):

RMAN> LIST RESTORE POINT ALL;

using target database control file instead of recovery catalog

SCN              RSP Time Type       Time                Name
---------------- -------- ---------- ------------------- ----
6836263465366             GUARANTEED 2014-10-07 11:55:19 REFRESH_TRANSPORT

SQL> DROP RESTORE POINT REFRESH_TRANSPORT;

Restore point dropped.

  • Import the schema's remaining metadata objects (packages, procedures, functions, etc.)

SQL> select object_type,status,count(*) from dba_objects where owner='TRANSPORT' group by object_type,status;

OBJECT_TYPE         STATUS  COUNT(*)
------------------- ------- ----------
TRIGGER             INVALID 6
TABLE               VALID   175
INDEX               VALID   383
LOB                 VALID   10

$ impdp system/password directory=EXP_DIR dumpfile=transport_meta_no_table_index.dmp content=metadata_only schemas=transport exclude=table exclude=index

SQL> select object_type,status,count(*) from dba_objects where owner='TRANSPORT' group by object_type,status;

OBJECT_TYPE         STATUS  COUNT(*)
------------------- ------- ----------
FUNCTION            VALID   1
SYNONYM             VALID   15
PACKAGE             VALID   20
SEQUENCE            VALID   6
TRIGGER             VALID   6
TABLE               VALID   175
PACKAGE BODY        VALID   18
INDEX               VALID   383
LOB                 VALID   10
VIEW                VALID   11

10 rows selected.
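Comparing the two object counts above is just a per-type difference. A small Python sketch using the counts from this example to show what the metadata-only import added:

```python
from collections import Counter

def metadata_import_delta(before, after):
    """Count, per OBJECT_TYPE, what the metadata-only import added."""
    # Counter subtraction keeps only positive differences.
    return dict(Counter(after) - Counter(before))

# Counts taken from the DBA_OBJECTS queries above (status ignored).
before = {"TRIGGER": 6, "TABLE": 175, "INDEX": 383, "LOB": 10}
after = {"FUNCTION": 1, "SYNONYM": 15, "PACKAGE": 20, "SEQUENCE": 6,
         "TRIGGER": 6, "TABLE": 175, "PACKAGE BODY": 18, "INDEX": 383,
         "LOB": 10, "VIEW": 11}
print(metadata_import_delta(before, after))
```

Note the count-based diff cannot show status changes, such as the 6 triggers going from INVALID to VALID.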


2 Responses to Transportable Tablespace on Physical Standby Database

  1. Hi James,

     Just a couple of questions here:

     RMAN> copy datafile 20 to '+DG2';
     – This makes a physical copy of datafile 20 to '+DG2', and '+DG2' is the diskgroup used by the TEST database, is that correct?

     Import TTS metadata
     – This will create the new user in the process, is that correct?

     • james huang says:

       Hi,

       I did it a couple of years ago; I'll answer your questions below.

       RMAN> copy datafile 20 to '+DG2';
       James –> Correct, '+DG2' is the diskgroup used by the TEST database.

       Import TTS metadata
       James –> It creates the new user only if you specify the "REMAP_SCHEMA" parameter; otherwise you need to manually create the same user as in the production database.
