Friday, January 29, 2016

How to Perform Oracle Data Pump in Different Scenarios on 11g

Oracle Data Pump technology enables very high-speed movement of data and metadata from one database to another. Oracle Data Pump is available only on Oracle Database 10g release 1 (10.1) and later. Oracle Data Pump is made up of three distinct parts:

-The command-line clients, expdp and impdp
-The DBMS_DATAPUMP PL/SQL package (also known as the Data Pump API)
-The DBMS_METADATA PL/SQL package (also known as the Metadata API)

To monitor the status of a Data Pump job, query the DBA_DATAPUMP_JOBS, USER_DATAPUMP_JOBS, or DBA_DATAPUMP_SESSIONS views. The V$SESSION_LONGOPS dynamic performance view also indicates job progress (in megabytes of table data transferred); its entry contains the estimated transfer size and is periodically updated to reflect the actual amount of data transferred.
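For example, a quick progress query (a sketch; the OPNAME filter assumes the default Data Pump job names, which contain EXPORT or IMPORT):

SQL> SELECT sid, serial#, opname, sofar, totalwork,
            ROUND(sofar/totalwork*100, 2) pct_done
       FROM v$session_longops
      WHERE totalwork > 0 AND sofar <> totalwork
        AND (opname LIKE '%EXPORT%' OR opname LIKE '%IMPORT%');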

## How to Estimate the Size of a Database Export
expdp system/password FULL=Y ESTIMATE_ONLY=Y
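To keep the estimate in a log file as well (EXP_DIR is a directory object such as the one created at the end of this post; note that ESTIMATE_ONLY cannot be combined with DUMPFILE):

expdp system/password DIRECTORY=EXP_DIR LOGFILE=estimate.log FULL=Y ESTIMATE_ONLY=Y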

## Perform Full Database Export/Import (any import operation can be performed if we have a full database dump)
expdp system/password DIRECTORY=DIR_NAME DUMPFILE=Dump_File_Name.dmp LOGFILE=log_file.log FULL=Y
impdp system/password DIRECTORY=DIR_NAME DUMPFILE=Dump_File_Name.dmp LOGFILE=Imp_log_file.log FULL=Y
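For a large database, the same full export can be split across several dump files and worker processes (a sketch; the %U substitution variable numbers the files automatically):

expdp system/password DIRECTORY=DIR_NAME DUMPFILE=full_%U.dmp LOGFILE=log_file.log FULL=Y PARALLEL=4 FILESIZE=5G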

## Export/Import a Specific Schema
expdp system/password DIRECTORY=EXP_DIR DUMPFILE=export_file.dmp LOGFILE=log_file.log SCHEMAS=user1
impdp system/password DIRECTORY=EXP_DIR DUMPFILE=export_file.dmp LOGFILE=Imp_log_file.log SCHEMAS=user1
impdp system/password DIRECTORY=EXP_DIR DUMPFILE=export_file.dmp LOGFILE=Imp_log_file.log SCHEMAS=user1 REMAP_SCHEMA=user1:user2 -Use REMAP_SCHEMA when the target username is different

## Export/Import a Schema Table from user1.table1 to user2
expdp system/password DIRECTORY=EXP_DIR DUMPFILE=export_file.dmp LOGFILE=log_file.log TABLES=user1.table1
impdp system/password DIRECTORY=EXP_DIR DUMPFILE=export_file.dmp LOGFILE=Imp_log_file.log TABLES=user1.table1
impdp system/password DIRECTORY=EXP_DIR DUMPFILE=export_file.dmp LOGFILE=Imp_log_file.log TABLES=user1.table1 REMAP_SCHEMA=user1:user2 

## Perform Export/Import with a QUERY Filter

expdp system/password DIRECTORY=EXP_DIR DUMPFILE=export_file.dmp LOGFILE=log_file.log TABLES=user1.table1 QUERY=user1.table1:"WHERE Sex='Female'"
impdp system/password DIRECTORY=EXP_DIR DUMPFILE=export_file.dmp LOGFILE=Imp_log_file.log TABLES=user1.table1 REMAP_TABLE=user1.table1:table2 QUERY=user1.table1:"WHERE name like '%R%'"

Example: 
QUERY=user1.table1:"WHERE hire_date > TO_DATE('01-DEC-2013','DD-MON-YYYY')"
QUERY=user1.table1:"WHERE Department='IT' AND Location='Bangalore'"
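On the command line, most shells require the quotes inside QUERY to be escaped; putting QUERY in a parameter file (see the par file section below) avoids that. A hypothetical fragment:

directory=EXP_DIR
dumpfile=export_file.dmp
logfile=log_file.log
tables=user1.table1
query=user1.table1:"WHERE Department='IT'"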

## Perform Export/Import with EXCLUDE/INCLUDE

expdp system/password DIRECTORY=EXP_DIR DUMPFILE=export_file.dmp LOGFILE=log_file.log FULL=Y EXCLUDE=TABLE:"IN('TABLE1','TABLE2')"

impdp system/password DIRECTORY=EXP_DIR DUMPFILE=export_file.dmp LOGFILE=log_file.log SCHEMAS=user1 REMAP_SCHEMA=user1:user2 INCLUDE=TABLE:"IN('TABLE1','TABLE2')"

impdp DIRECTORY=EXP_DIR DUMPFILE=export_file.dmp LOGFILE=log_file.log SCHEMAS=User1 INCLUDE=INDEX SQLFILE=EXP_DIR:domain_idx.sql -To generate the index DDL from the dump file (nothing is imported)

Example:
INCLUDE=TABLE:" like '%TAB%'"
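EXCLUDE can also be repeated to filter several object types in a single run, for example to skip statistics and grants (a sketch):

expdp system/password DIRECTORY=EXP_DIR DUMPFILE=export_file.dmp LOGFILE=log_file.log SCHEMAS=user1 EXCLUDE=STATISTICS EXCLUDE=GRANT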

## Take Import/Export Using Parameter (.par) File
vi imp_schema_parfile.par

directory=EXP_DIR
dumpfile=Dump_file.dmp
logfile=Dump_file.log
schemas=rajesh
version=COMPATIBLE
remap_schema=rajesh:rahul
remap_tablespace=VGSM_STATIC:SM_STATIC,VGSM_INDEX:SM_STATIC_I
table_exists_action=truncate
parallel=4
# tables=rajesh.table1,rajesh.table2,rajesh.table3 -Table mode cannot be combined with SCHEMAS in the same job
# IGNORE=Y is a legacy imp parameter; impdp uses TABLE_EXISTS_ACTION instead
:wq

impdp parfile=imp_schema_parfile.par
An export parameter file works the same way with expdp, as sketched below.
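For example, a minimal export parameter file (file and schema names are illustrative):

vi exp_schema_parfile.par

directory=EXP_DIR
dumpfile=exp_file_%U.dmp
logfile=exp_file.log
schemas=rajesh
parallel=4
:wq

expdp parfile=exp_schema_parfile.par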

## Take Import/Export Using the NETWORK_LINK Parameter
The NETWORK_LINK parameter is very useful when the file system is short of space during an export/import. With it you can write the export dump file on a different server, and you can also import directly from the source database into the target without creating a dump file at all.

Example:

Create a database link on the server (source or target) where you are performing the export/import.

CREATE DATABASE LINK SYS_DBLNK CONNECT TO system IDENTIFIED BY password USING '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(COMMUNITY=TCP)(PROTOCOL=TCP)(Host=servername)(Port=1521)))(CONNECT_DATA=(SID=REMOTE)))';
OR
CREATE DATABASE LINK SYS_DBLNK CONNECT TO system IDENTIFIED BY password USING 'REMOTE';
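Before starting the job, verify that the link actually works:

SQL> SELECT sysdate FROM dual@SYS_DBLNK;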

nohup expdp system/password directory=EXP_DIR dumpfile=file.dmp logfile=file.log SCHEMAS=User1,User2 network_link=SYS_DBLNK & -Pulls data from the remote database over the link and writes the .dmp file on this server

nohup impdp system/password directory=EXP_DIR dumpfile=file.dmp logfile=file.log SCHEMAS=User1,User2 & -Imports into the target DB from the .dmp file taken above (DUMPFILE cannot be combined with NETWORK_LINK)

nohup impdp system/password directory=EXP_DIR logfile=file.log SCHEMAS=User1,User2 network_link=SYS_DBLNK & -Imports data directly over the link, without any .dmp file


## How to Run a Job in the Background via a Shell Script, Using a Par File or a Single Line
nohup impdp '"/ as sysdba"' parfile=imp_schema_parfile.par &
nohup expdp '"/ as sysdba"' parfile=exp_schema_parfile.par &

OR
vi impdp_schema_shell.sh
echo "File import Starting at: `date`"
impdp '"/ as sysdba"' parfile=imp_schema_parfile.par
echo "File import Finish at: `date`"
:wq

nohup sh impdp_schema_shell.sh > impdp_schema_shell.log &
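You can then follow the progress from the log file:

tail -f impdp_schema_shell.log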


## Check the Status of a Running Job, Both from SQL and from the Data Pump Prompt
Find the job name in the export/import log file, or specify your own with the JOB_NAME parameter when starting the job. You can also find currently running jobs through the data dictionary view below.

SQL> select * from dba_datapump_jobs;
OR 
expdp "/ as sysdba" attach='' -At prompt type "status"
impdp "/ as sysdba" attach='' -At prompt type "status"


## How to Kill a Running Export/Import Job

expdp "/ as sysdba" attach='' -At prompt type "kill_job" Enter "Yes"
impdp "/ as sysdba" attach='' -At prompt type "kill_job" Enter "Yes"


## Create a Data Pump Directory in Oracle
CREATE DIRECTORY exp_dir AS '/path1/path2/';
CREATE OR REPLACE DIRECTORY exp_dir AS '/path1/path2/';
GRANT READ, WRITE ON DIRECTORY exp_dir TO username; -Or grant to PUBLIC
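To confirm the directory object and the operating-system path it points to:

SQL> select directory_name, directory_path from dba_directories;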


## Some Commonly Used Parameters
-COMPRESSION: Specifies whether to compress metadata before writing to the dump file set.
-CONTENT: Enables you to filter what Export unloads: data only, metadata only, or both.
-DIRECTORY: Specifies the default location to which Export can write the dump file set and the log file.
-DUMPFILE: Specifies the names, and optionally, the directory objects of dump files for an export job.
-LOGFILE: Specifies the name, and optionally, a directory, for the log file of the export job.
-ESTIMATE_ONLY: Instructs Export to estimate the space that a job would consume, without actually performing the export operation.
-EXCLUDE: Enables you to filter the metadata that is exported by specifying objects and object types that you want excluded from the export operation.
-FULL: Specifies that you want to perform a full database mode export.
-INCLUDE: Enables you to filter the metadata that is exported by specifying objects and object types for the current export mode. The specified objects and all their dependent objects are exported. Grants on these objects are also exported.
-PARFILE: Specifies the name of an export parameter file.
-QUERY: Enables you to filter the data that is exported by specifying a clause for a SQL SELECT statement, which is applied to all tables in the export job or to a specific table.
-SCHEMAS: Specifies that you want to perform a schema-mode export. This is the default mode for Export.
-TABLES: Specifies that you want to perform a table-mode export.
-REMAP_SCHEMA: Loads all objects from the source schema into a target schema.
-REMAP_TABLE: Allows you to rename tables during an import operation performed with the transportable method (REMAP_TABLE=hr.employees:emps).
-REMAP_TABLESPACE: Remaps all objects selected for import with persistent data in the source tablespace to be created in the target tablespace.
-TABLE_EXISTS_ACTION: Tells Import what to do if the table it is trying to create already exists.