Wednesday, May 18, 2016

PASSWORD RESET

1. Create a user with an expired password:
SQL> create user Moid identified by abc123 password expire;

2. Change an existing user's password:
SQL> alter user scott identified by india123;

3. How to check the last password change date in Oracle:
set linesize 200
set pagesize 200
col Last_Password_Change_Time format a30
select
     user#,
     name,
     ctime as Creation_Time,
     --to_char(ctime,'DD-MON-YY HH:MI') as Creation_Time,
     to_char(ptime,'DD-MON-YY HH:MI') as Last_Password_Change_Time
from
     SYS.USER$
where
     ctime is not null
     and
     name not in (select role from dba_roles)
order by ptime;
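A non-SYS alternative: coarser password-age information is also exposed through DBA_USERS, without querying SYS.USER$ directly. This is a sketch; EXPIRY_DATE is derived from the last password change plus the profile's PASSWORD_LIFE_TIME, and is NULL when the life time is UNLIMITED.

```sql
-- Account status and password expiry from DBA_USERS.
-- EXPIRY_DATE = last password change + profile PASSWORD_LIFE_TIME.
select username, account_status, lock_date, expiry_date
from   dba_users
where  username = 'SCOTT';
```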

4. To unlock an account:
SQL> alter user username account unlock;

5. To reset a password:
SQL> alter user username identified by new_password;
 
 
. oraenv 
PrimeDG
echo $ORACLE_SID
sqlplus / as sysdba
SQL>show user;
SQL>select name from v$database;
SQL>select username from dba_users where username='AMIN';
SQL>select account_status from dba_users where username='AMIN';
SQL>alter user Amin identified by india123 password expire;
or
SQL>alter user scott identified by india123;
(Usernames are stored in upper case in DBA_USERS unless the account was created with a quoted identifier, so filter on 'AMIN', not 'Amin'.)

 

Sunday, May 8, 2016

SQL> select c.owner ||','|| c.table_name ||'('|| c.column_name ||')-'|| c.data_type col
from dba_tab_cols c, dba_objects o
where c.data_type like '%WITH TIME ZONE'
and c.owner = o.owner
and c.table_name = o.object_name
and o.object_type = 'TABLE'
order by col
/

COL
--------------------------------------------------------------------------------
DBSNMP,MGMT_DB_FEATURE_LOG(LAST_UPDATE_DATE)-TIMESTAMP(6) WITH TIME ZONE
IX,AQ$_ORDERS_QUEUETABLE_L(DEQUEUE_TIME)-TIMESTAMP(6) WITH TIME ZONE
IX,AQ$_ORDERS_QUEUETABLE_S(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
IX,AQ$_ORDERS_QUEUETABLE_S(DELETION_TIME)-TIMESTAMP(6) WITH TIME ZONE
IX,AQ$_ORDERS_QUEUETABLE_S(MODIFICATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
IX,AQ$_STREAMS_QUEUE_TABLE_L(DEQUEUE_TIME)-TIMESTAMP(6) WITH TIME ZONE
IX,AQ$_STREAMS_QUEUE_TABLE_S(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
IX,AQ$_STREAMS_QUEUE_TABLE_S(DELETION_TIME)-TIMESTAMP(6) WITH TIME ZONE
IX,AQ$_STREAMS_QUEUE_TABLE_S(MODIFICATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,ALERT_QT(SYS_NC00029$)-TIMESTAMP(0) WITH TIME ZONE
SYS,AQ$_ALERT_QT_L(DEQUEUE_TIME)-TIMESTAMP(6) WITH TIME ZONE

SYS,AQ$_ALERT_QT_S(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_ALERT_QT_S(DELETION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_ALERT_QT_S(MODIFICATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_AQ$_MEM_MC_L(DEQUEUE_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_AQ$_MEM_MC_S(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_AQ$_MEM_MC_S(DELETION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_AQ$_MEM_MC_S(MODIFICATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_AQ_PROP_TABLE_L(DEQUEUE_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_AQ_PROP_TABLE_S(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_AQ_PROP_TABLE_S(DELETION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_AQ_PROP_TABLE_S(MODIFICATION_TIME)-TIMESTAMP(6) WITH TIME ZONE

SYS,AQ$_KUPC$DATAPUMP_QUETAB_1_L(DEQUEUE_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_KUPC$DATAPUMP_QUETAB_1_S(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_KUPC$DATAPUMP_QUETAB_1_S(DELETION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_KUPC$DATAPUMP_QUETAB_1_S(MODIFICATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_KUPC$DATAPUMP_QUETAB_L(DEQUEUE_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_KUPC$DATAPUMP_QUETAB_S(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_KUPC$DATAPUMP_QUETAB_S(DELETION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_KUPC$DATAPUMP_QUETAB_S(MODIFICATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_SCHEDULER$_EVENT_QTAB_L(DEQUEUE_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_SCHEDULER$_EVENT_QTAB_S(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_SCHEDULER$_EVENT_QTAB_S(DELETION_TIME)-TIMESTAMP(6) WITH TIME ZONE

SYS,AQ$_SCHEDULER$_EVENT_QTAB_S(MODIFICATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_SCHEDULER$_REMDB_JOBQTAB_L(DEQUEUE_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_SCHEDULER$_REMDB_JOBQTAB_S(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_SCHEDULER$_REMDB_JOBQTAB_S(DELETION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_SCHEDULER$_REMDB_JOBQTAB_S(MODIFICATION_TIME)-TIMESTAMP(6) WITH TIME ZONE

SYS,AQ$_SCHEDULER_FILEWATCHER_QT_L(DEQUEUE_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_SCHEDULER_FILEWATCHER_QT_S(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_SCHEDULER_FILEWATCHER_QT_S(DELETION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_SCHEDULER_FILEWATCHER_QT_S(MODIFICATION_TIME)-TIMESTAMP(6) WITH TIME ZONE

SYS,AQ$_SUBSCRIBER_TABLE(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_SUBSCRIBER_TABLE(DELETION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_SUBSCRIBER_TABLE(MODIFICATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_SYS$SERVICE_METRICS_TAB_L(DEQUEUE_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_SYS$SERVICE_METRICS_TAB_S(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_SYS$SERVICE_METRICS_TAB_S(DELETION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,AQ$_SYS$SERVICE_METRICS_TAB_S(MODIFICATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,FGR$_FILE_GROUPS(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,FGR$_FILE_GROUP_FILES(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE

SYS,FGR$_FILE_GROUP_VERSIONS(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,KET$_AUTOTASK_STATUS(ABA_START_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,KET$_AUTOTASK_STATUS(ABA_STATE_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,KET$_AUTOTASK_STATUS(MW_RECORD_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,KET$_AUTOTASK_STATUS(MW_START_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,KET$_AUTOTASK_STATUS(RECONCILE_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,KET$_CLIENT_CONFIG(FIELD_2)-TIMESTAMP(6) WITH TIME ZONE
SYS,KET$_CLIENT_CONFIG(LAST_CHANGE)-TIMESTAMP(6) WITH TIME ZONE
SYS,KET$_CLIENT_TASKS(CURR_WIN_START)-TIMESTAMP(6) WITH TIME ZONE
SYS,KET$_CLIENT_TASKS(LG_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,KET$_CLIENT_TASKS(LT_DATE)-TIMESTAMP(6) WITH TIME ZONE

SYS,OPTSTAT_HIST_CONTROL$(SPARE6)-TIMESTAMP(6) WITH TIME ZONE
SYS,OPTSTAT_HIST_CONTROL$(SVAL2)-TIMESTAMP(6) WITH TIME ZONE
SYS,OPTSTAT_USER_PREFS$(CHGTIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,REG$(NTFN_GROUPING_START_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,REG$(REG_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_EVENT_LOG(LOG_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_EVENT_QTAB(SYS_NC00032$)-TIMESTAMP(0) WITH TIME ZONE
SYS,SCHEDULER$_EVENT_QTAB(SYS_NC00044$)-TIMESTAMP(0) WITH TIME ZONE
SYS,SCHEDULER$_EVENT_QTAB(SYS_NC00045$)-TIMESTAMP(0) WITH TIME ZONE
SYS,SCHEDULER$_FILEWATCHER_HISTORY(LAST_CHECK_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_FILEWATCHER_RESEND(INS_TSTAMP)-TIMESTAMP(6) WITH TIME ZONE

SYS,SCHEDULER$_FILE_WATCHER(LAST_MODIFIED_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_GLOBAL_ATTRIBUTE(ATTR_TSTAMP)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_JOB(END_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_JOB(LAST_ENABLED_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_JOB(LAST_END_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_JOB(LAST_START_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_JOB(NEXT_RUN_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_JOB(START_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_JOB_DESTINATIONS(LAST_ENABLED_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_JOB_DESTINATIONS(LAST_END_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_JOB_DESTINATIONS(LAST_START_DATE)-TIMESTAMP(6) WITH TIME ZONE

SYS,SCHEDULER$_JOB_DESTINATIONS(NEXT_START_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_JOB_RUN_DETAILS(LOG_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_JOB_RUN_DETAILS(REQ_START_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_JOB_RUN_DETAILS(START_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_LIGHTWEIGHT_JOB(END_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_LIGHTWEIGHT_JOB(LAST_ENABLED_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_LIGHTWEIGHT_JOB(LAST_END_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_LIGHTWEIGHT_JOB(LAST_START_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_LIGHTWEIGHT_JOB(NEXT_RUN_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_LIGHTWEIGHT_JOB(START_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_REMDB_JOBQTAB(SYS_NC00041$)-TIMESTAMP(0) WITH TIME ZONE

SYS,SCHEDULER$_REMDB_JOBQTAB(SYS_NC00042$)-TIMESTAMP(0) WITH TIME ZONE
SYS,SCHEDULER$_REMOTE_JOB_STATE(LAST_END_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_REMOTE_JOB_STATE(LAST_START_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_REMOTE_JOB_STATE(NEXT_START_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_SCHEDULE(END_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_SCHEDULE(REFERENCE_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_STEP_STATE(END_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_STEP_STATE(START_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_WINDOW(ACTUAL_START_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_WINDOW(END_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_WINDOW(LAST_START_DATE)-TIMESTAMP(6) WITH TIME ZONE

SYS,SCHEDULER$_WINDOW(MANUAL_OPEN_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_WINDOW(NEXT_START_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_WINDOW(START_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_WINDOW_DETAILS(LOG_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_WINDOW_DETAILS(REQ_START_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER$_WINDOW_DETAILS(START_DATE)-TIMESTAMP(6) WITH TIME ZONE
SYS,SCHEDULER_FILEWATCHER_QT(SYS_NC00033$)-TIMESTAMP(0) WITH TIME ZONE
SYS,TSM_DST$(DST_END_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,TSM_DST$(DST_INST_START_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,TSM_DST$(DST_START_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,TSM_SRC$(SRC_END_TIME)-TIMESTAMP(6) WITH TIME ZONE

SYS,TSM_SRC$(SRC_INST_START_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,TSM_SRC$(SRC_START_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,WRI$_ALERT_HISTORY(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,WRI$_ALERT_HISTORY(TIME_SUGGESTED)-TIMESTAMP(6) WITH TIME ZONE
SYS,WRI$_ALERT_OUTSTANDING(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,WRI$_ALERT_OUTSTANDING(TIME_SUGGESTED)-TIMESTAMP(6) WITH TIME ZONE
SYS,WRI$_OPTSTAT_AUX_HISTORY(SAVTIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,WRI$_OPTSTAT_AUX_HISTORY(SPARE6)-TIMESTAMP(6) WITH TIME ZONE
SYS,WRI$_OPTSTAT_HISTGRM_HISTORY(SAVTIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,WRI$_OPTSTAT_HISTGRM_HISTORY(SPARE6)-TIMESTAMP(6) WITH TIME ZONE
SYS,WRI$_OPTSTAT_HISTHEAD_HISTORY(SAVTIME)-TIMESTAMP(6) WITH TIME ZONE

SYS,WRI$_OPTSTAT_HISTHEAD_HISTORY(SPARE6)-TIMESTAMP(6) WITH TIME ZONE
SYS,WRI$_OPTSTAT_IND_HISTORY(SAVTIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,WRI$_OPTSTAT_IND_HISTORY(SPARE6)-TIMESTAMP(6) WITH TIME ZONE
SYS,WRI$_OPTSTAT_OPR(END_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,WRI$_OPTSTAT_OPR(SPARE6)-TIMESTAMP(6) WITH TIME ZONE
SYS,WRI$_OPTSTAT_OPR(START_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,WRI$_OPTSTAT_TAB_HISTORY(SAVTIME)-TIMESTAMP(6) WITH TIME ZONE
SYS,WRI$_OPTSTAT_TAB_HISTORY(SPARE6)-TIMESTAMP(6) WITH TIME ZONE
SYS,WRR$_REPLAY_DIVERGENCE(TIME)-TIMESTAMP(6) WITH TIME ZONE
SYSMAN,AQ$_MGMT_LOADER_QTABLE_L(DEQUEUE_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYSMAN,AQ$_MGMT_LOADER_QTABLE_S(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE

SYSMAN,AQ$_MGMT_LOADER_QTABLE_S(DELETION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYSMAN,AQ$_MGMT_LOADER_QTABLE_S(MODIFICATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYSMAN,AQ$_MGMT_NOTIFY_QTABLE_L(DEQUEUE_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYSMAN,AQ$_MGMT_NOTIFY_QTABLE_S(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYSMAN,AQ$_MGMT_NOTIFY_QTABLE_S(DELETION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYSMAN,AQ$_MGMT_NOTIFY_QTABLE_S(MODIFICATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYSMAN,MGMT_PROV_ASSIGNMENT(LAST_MODIFIED_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYSMAN,MGMT_PROV_ASSIGNMENT(START_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYSMAN,MGMT_PROV_BOOTSERVER(LAST_MODIFIED_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYSMAN,MGMT_PROV_CLUSTER_NODES(LAST_MODIFIED_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYSMAN,MGMT_PROV_DEFAULT_IMAGE(LAST_MODIFIED_TIME)-TIMESTAMP(6) WITH TIME ZONE

SYSMAN,MGMT_PROV_IP_RANGE(LAST_MODIFIED_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYSMAN,MGMT_PROV_NET_CONFIG(LAST_MODIFIED_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYSMAN,MGMT_PROV_OPERATION(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYSMAN,MGMT_PROV_OPERATION(LAST_MODIFIED_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYSMAN,MGMT_PROV_RPM_REP(LAST_MODIFIED_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYSMAN,MGMT_PROV_STAGING_DIRS(LAST_MODIFIED_TIME)-TIMESTAMP(6) WITH TIME ZONE
SYSMAN,MGMT_PROV_SUITE_INST_MEMBERS(LAST_MODIFIED_TIME)-TIMESTAMP(6) WITH TIME ZONE

WMSYS,AQ$_WM$EVENT_QUEUE_TABLE_L(DEQUEUE_TIME)-TIMESTAMP(6) WITH TIME ZONE
WMSYS,AQ$_WM$EVENT_QUEUE_TABLE_S(CREATION_TIME)-TIMESTAMP(6) WITH TIME ZONE

WMSYS,AQ$_WM$EVENT_QUEUE_TABLE_S(DELETION_TIME)-TIMESTAMP(6) WITH TIME ZONE
WMSYS,AQ$_WM$EVENT_QUEUE_TABLE_S(MODIFICATION_TIME)-TIMESTAMP(6) WITH TIME ZONE
WMSYS,WM$VERSIONED_TABLES(SYS_NC00020$)-TIMESTAMP(0) WITH TIME ZONE
WMSYS,WM$VERSIONED_TABLES(SYS_NC00021$)-TIMESTAMP(0) WITH TIME ZONE

163 rows selected.

SQL>



SQL> select distinct c.owner
from dba_tab_cols c, dba_objects o
where c.data_type like '%WITH TIME ZONE'
and c.owner = o.owner
and c.table_name = o.object_name
and o.object_type = 'TABLE'
order by 1
/

OWNER
------------------------------
DBSNMP
IX
SYS
SYSMAN
WMSYS

SQL>

Saturday, May 7, 2016

MIGRATION DOCUMENT


SQL> set lines 300
SQL> set timing on
SQL> column parameter format a30

SQL> select parameter,value from nls_database_parameters where parameter=
  2  'NLS_CHARACTERSET' or parameter='NLS_NCHAR_CHARACTERSET';


PARAMETER                      VALUE
------------------------------ ----------------------------------------
NLS_CHARACTERSET               WE8MSWIN1252
NLS_NCHAR_CHARACTERSET         AL16UTF16

Elapsed: 00:00:00.12
SQL>


SQL> set lines 300
SQL> set timing on
column comp_name format a35
column version format a15
column status format a29
select comp_name,version,status from dba_registry;



COMP_NAME                           VERSION         STATUS
----------------------------------- --------------- -----------------------------
OWB                                 11.2.0.1.0      VALID
Oracle Application Express          3.2.1.00.10     VALID
Oracle Enterprise Manager           11.2.0.1.0      VALID
OLAP Catalog                        11.2.0.1.0      VALID
Spatial                             11.2.0.1.0      VALID
Oracle Multimedia                   11.2.0.1.0      VALID
Oracle XML Database                 11.2.0.1.0      VALID
Oracle Text                         11.2.0.1.0      VALID
Oracle Expression Filter            11.2.0.1.0      VALID
Oracle Rules Manager                11.2.0.1.0      VALID
Oracle Workspace Manager            11.2.0.1.0      VALID

Oracle Database Catalog Views       11.2.0.1.0      VALID
Oracle Database Packages and Types  11.2.0.1.0      VALID
JServer JAVA Virtual Machine        11.2.0.1.0      VALID
Oracle XDK                          11.2.0.1.0      VALID
Oracle Database Java Packages       11.2.0.1.0      VALID
OLAP Analytic Workspace             11.2.0.1.0      VALID
Oracle OLAP API                     11.2.0.1.0      VALID

18 rows selected.

[oracle@PrimeDG admin]$ {ORACLE_HOME}/OPatch/opatch lsinventory -patch -detail -invPtrLoc /var/opt/oracle/oraInst_${ORACLE_SID}_11201.loc
-bash: {ORACLE_HOME}/OPatch/opatch: No such file or directory
[oracle@PrimeDG admin]$
(The error is a quoting slip: {ORACLE_HOME} is missing its leading $. The command should start with ${ORACLE_HOME}/OPatch/opatch.)



SQL> set linesize 200
SQL> set pagesize 100
SQL> select substr(action_time,1,30) action_time,substr(id,1,8) id, substr(action,1,10)action, substr(version,1,8) version, substr(comments,1,20)comments from registry$history;

no rows selected

SQL> show parameter cache_size;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
client_result_cache_size             big integer 0
db_16k_cache_size                    big integer 16M
db_2k_cache_size                     big integer 0
db_32k_cache_size                    big integer 0
db_4k_cache_size                     big integer 0
db_8k_cache_size                     big integer 0
db_cache_size                        big integer 0
db_flash_cache_size                  big integer 0
db_keep_cache_size                   big integer 0
db_recycle_cache_size                big integer 0
SQL> select tablespace_name,block_size from dba_tablespaces;

TABLESPACE_NAME                BLOCK_SIZE
------------------------------ ----------
SYSTEM                               8192
SYSAUX                               8192
UNDOTBS1                             8192
TEMP                                 8192
USERS                                8192
EXAMPLE                              8192
MoidTBS                              8192
TESTTBS                              8192
TS_16K                              16384
GGS_DATA                             8192

10 rows selected.

SQL>



init.ora




# Change '<ORACLE_BASE>' to point to the oracle base (the one you specify at
# install time)

db_name='ORCL'
memory_target=1G
processes = 150
audit_file_dest='<ORACLE_BASE>/admin/orcl/adump'
audit_trail ='db'
db_block_size=8192
db_domain=''
db_recovery_file_dest='<ORACLE_BASE>/flash_recovery_area'
db_recovery_file_dest_size=2G
diagnostic_dest='<ORACLE_BASE>'
dispatchers='(PROTOCOL=TCP) (SERVICE=ORCLXDB)'
open_cursors=300
remote_login_passwordfile='EXCLUSIVE'
undo_tablespace='UNDOTBS1'
# You may want to ensure that control files are created on separate physical
# devices
control_files = (ora_control1, ora_control2)
compatible ='11.2.0'





[oracle@PrimeDG dbs]$ rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Sat May 7 20:49:26 2016

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PRIMEDG (DBID=2797264308)

RMAN> configure device type disk parallelism 4 backup type to backupset;

using target database control file instead of recovery catalog
old RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 1;
new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored

RMAN> backup validate check logical database;

Starting backup at 07-MAY-16
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=74 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=133 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=9 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=72 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/PrimeDG/system01.dbf
input datafile file number=00003 name=/u01/app/oracle/PrimeDG/undotbs01.dbf
input datafile file number=00008 name=/u14/oradata/PrimeDG/testtbs02.dbf
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00002 name=/u01/app/oracle/PrimeDG/sysaux01.dbf
input datafile file number=00007 name=/u14/oradata/PrimeDG/testtbs01.dbf
input datafile file number=00004 name=/u01/app/oracle/PrimeDG/users01.dbf
channel ORA_DISK_3: starting full datafile backup set
channel ORA_DISK_3: specifying datafile(s) in backup set
input datafile file number=00010 name=/u01/app/oracle/PrimeDG/ggs_data01.dbf
input datafile file number=00005 name=/u01/app/oracle/PrimeDG/example01.dbf
input datafile file number=00006 name=/u01/app/oracle/PrimeDG/MoidTBS01.dbf
channel ORA_DISK_4: starting full datafile backup set
channel ORA_DISK_4: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_4: backup set complete, elapsed time: 00:00:01
List of Control File and SPFILE
===============================
File Type    Status Blocks Failing Blocks Examined
------------ ------ -------------- ---------------
Control File OK     0              596
channel ORA_DISK_4: starting full datafile backup set
channel ORA_DISK_4: specifying datafile(s) in backup set
input datafile file number=00009 name=/u01/app/oracle/PrimeDG/TS_16K.dbf
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:12
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
1    OK     0              160          673             2525969
  File Name: /u01/app/oracle/PrimeDG/system01.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              100
  Index      0              177
  Other      0              236

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
3    OK     0              0            639             2528079
  File Name: /u01/app/oracle/PrimeDG/undotbs01.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              0
  Index      0              0
  Other      0              639

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
8    OK     0              512          639             1083162
  File Name: /u14/oradata/PrimeDG/testtbs02.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              0
  Index      0              0
  Other      0              127

channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:12
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
2    OK     0              347          895             2527878
  File Name: /u01/app/oracle/PrimeDG/sysaux01.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              30
  Index      0              103
  Other      0              415

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
4    OK     0              170          774             2494117
  File Name: /u01/app/oracle/PrimeDG/users01.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              227
  Index      0              62
  Other      0              315

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
7    OK     0              768          895             1082640
  File Name: /u14/oradata/PrimeDG/testtbs01.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              0
  Index      0              0
  Other      0              127

channel ORA_DISK_3: backup set complete, elapsed time: 00:00:13
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
5    OK     0              129          639             2418899
  File Name: /u01/app/oracle/PrimeDG/example01.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              228
  Index      0              27
  Other      0              255

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
6    OK     0              512          639             1068108
  File Name: /u01/app/oracle/PrimeDG/MoidTBS01.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              0
  Index      0              0
  Other      0              127

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
10   OK     0              4            646             2525643
  File Name: /u01/app/oracle/PrimeDG/ggs_data01.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              385
  Index      0              75
  Other      0              182

channel ORA_DISK_4: backup set complete, elapsed time: 00:00:09
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
9    OK     0              473          640             1390551
  File Name: /u01/app/oracle/PrimeDG/TS_16K.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              100
  Index      0              0
  Other      0              67

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
List of Control File and SPFILE
===============================
File Type    Status Blocks Failing Blocks Examined
------------ ------ -------------- ---------------
SPFILE       OK     0              2
Finished backup at 07-MAY-16

RMAN>


SQL> select * from v$database_block_corruption;

no rows selected

SQL> column acl format a30
SQL> column host format a20
SQL> column principal format a20
SQL> column privilege format a10
SQL> column is_grant format a8
SQL> set lines 1000
SQL> select acl, host, lower_port, upper_port from DBA_NETWORK_ACLS;

no rows selected

SQL> select acl, principal, privilege, is_grant from DBA_NETWORK_ACL_PRIVILEGES;

no rows selected

SQL> select OPEN_MODE, LOG_MODE FROM SYS.V$DATABASE;

OPEN_MODE            LOG_MODE
-------------------- ------------
READ WRITE           ARCHIVELOG

SQL> SHOW PARAMETER SPFILE;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      /u01/app/oracle/product/11.2.0
                                                 /db_1/dbs/spfilePrimeDG.ora
SQL>
SQL> SHOW PARAMETER LOG_ARCHIVE_DEST

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest                     string
log_archive_dest_1                   string
log_archive_dest_10                  string
log_archive_dest_11                  string
log_archive_dest_12                  string
log_archive_dest_13                  string
log_archive_dest_14                  string
log_archive_dest_15                  string
log_archive_dest_16                  string
log_archive_dest_17                  string
log_archive_dest_18                  string

log_archive_dest_19                  string
log_archive_dest_2                   string
log_archive_dest_20                  string
log_archive_dest_21                  string
log_archive_dest_22                  string
log_archive_dest_23                  string
log_archive_dest_24                  string
log_archive_dest_25                  string
log_archive_dest_26                  string
log_archive_dest_27                  string
log_archive_dest_28                  string

log_archive_dest_29                  string
log_archive_dest_3                   string
log_archive_dest_30                  string
log_archive_dest_31                  string
log_archive_dest_4                   string
log_archive_dest_5                   string
log_archive_dest_6                   string
log_archive_dest_7                   string
log_archive_dest_8                   string
log_archive_dest_9                   string
log_archive_dest_state_1             string      enable

log_archive_dest_state_10            string      enable
log_archive_dest_state_11            string      enable
log_archive_dest_state_12            string      enable
log_archive_dest_state_13            string      enable
log_archive_dest_state_14            string      enable
log_archive_dest_state_15            string      enable
log_archive_dest_state_16            string      enable
log_archive_dest_state_17            string      enable
log_archive_dest_state_18            string      enable
log_archive_dest_state_19            string      enable
log_archive_dest_state_2             string      enable

log_archive_dest_state_20            string      enable
log_archive_dest_state_21            string      enable
log_archive_dest_state_22            string      enable
log_archive_dest_state_23            string      enable
log_archive_dest_state_24            string      enable
log_archive_dest_state_25            string      enable
log_archive_dest_state_26            string      enable
log_archive_dest_state_27            string      enable
log_archive_dest_state_28            string      enable
log_archive_dest_state_29            string      enable
log_archive_dest_state_3             string      enable

log_archive_dest_state_30            string      enable
log_archive_dest_state_31            string      enable
log_archive_dest_state_4             string      enable
log_archive_dest_state_5             string      enable
log_archive_dest_state_6             string      enable
log_archive_dest_state_7             string      enable
log_archive_dest_state_8             string      enable
log_archive_dest_state_9             string      enable
SQL>

[oracle@PrimeDG dbs]$ cp -p $ORACLE_HOME/dbs/spfile${ORACLE_SID}.ora $ORACLE_HOME/dbs/spfile${ORACLE_SID}.ora_'date+"%Y%m%d"'
[oracle@PrimeDG dbs]$ ls -ltr
total 9600
-rw-r--r-- 1 oracle oinstall    2851 May 15  2009 init.ora
-rw-rw---- 1 oracle oinstall    1544 Mar 28 15:36 hc_DBUA0.dat
-rw-rw---- 1 oracle oinstall    1544 Mar 28 15:45 hc_PrimeDG.dat
-rw-r----- 1 oracle oinstall      24 Mar 28 15:46 lkPRIMEDG
-rw-r----- 1 oracle oinstall    1536 Apr  3 21:04 orapwPrimeDG
-rw-r--r-- 1 oracle oinstall    1009 Apr  6 08:49 initPrimeDG.ora
-rw-r----- 1 oracle oinstall    3584 May  4 22:21 spfilePrimeDG.ora_date+"%Y%m%d"
-rw-r----- 1 oracle oinstall    3584 May  4 22:21 spfilePrimeDG.ora
-rw-r----- 1 oracle oinstall 9781248 May  7 21:00 snapcf_PrimeDG.f
[oracle@PrimeDG dbs]$



cp -p $ORACLE_HOME/dbs/spfile${ORACLE_SID}.ora $ORACLE_HOME/dbs/spfile${ORACLE_SID}.ora_`date +"%Y%m%d"`
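Note the file spfilePrimeDG.ora_date+"%Y%m%d" in the listing above: straight single quotes were used where backtick (or, better, $( )) command substitution was intended, so the suffix was taken literally instead of expanding to the date. A sketch of the corrected form; when no real spfile is present it creates a scratch stand-in, so it can be run safely outside a database host:

```shell
# Sketch: datestamped spfile backup.  $(...) is command substitution;
# straight quotes around `date` produce a literal "date+..." suffix.
SPFILE="${ORACLE_HOME:-/tmp/demo_home}/dbs/spfile${ORACLE_SID:-PrimeDG}.ora"
STAMP=$(date +%Y%m%d)
mkdir -p "$(dirname "$SPFILE")"               # demo only: ensure path exists
[ -f "$SPFILE" ] || echo "demo" > "$SPFILE"   # demo only: stand-in spfile
cp -p "$SPFILE" "${SPFILE}_${STAMP}"
ls -l "${SPFILE}_${STAMP}"
```

The May 8 post further down shows the same command working correctly with backticks.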

PrimeDG


[root@PrimeDG /]# mkdir -p /data/oracle/staging/$ORACLE_SID
[root@PrimeDG /]# chmod 755 /data/oracle/staging/$ORACLE_SID
[root@PrimeDG staging]# chown -R oracle:dba /data/oracle/staging/$ORACLE_SID

[root@PrimeDG staging]# su - oracle
[oracle@PrimeDG ~]$ cd /data/oracle/staging/$ORACLE_SID
[oracle@PrimeDG staging]$ ls
[oracle@PrimeDG staging]$ ls -altr
total 8
drwxr-xr-x 3 root   root 4096 May  7 21:17 ..
drwxr-xr-x 2 oracle dba  4096 May  7 21:17 .

[oracle@PrimeDG oracle]$ . oraenv
ORACLE_SID = [oracle] ? PrimeDG
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 is /u01/app/oracle
[oracle@PrimeDG oracle]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sat May 7 21:20:18 2016

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> create directory migrate as '/data/oracle/staging/PrimeDG';

Directory created.

SQL> grant read,write on directory migrate to system;

Grant succeeded.

SQL>
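One caveat worth noting: shell variables such as $ORACLE_SID are not expanded inside a SQL*Plus string literal, so the path in CREATE DIRECTORY must be spelled out, or the script must be generated from the shell before SQL*Plus sees it. A sketch of the latter (the /tmp script path is illustrative):

```shell
# Sketch: build the CREATE DIRECTORY script in the shell so that
# ${ORACLE_SID} is expanded *before* SQL*Plus sees the path.
ORACLE_SID="${ORACLE_SID:-PrimeDG}"    # assumption: the SID used in this post
cat > /tmp/create_migrate_dir.sql <<EOF
create directory migrate as '/data/oracle/staging/${ORACLE_SID}';
grant read, write on directory migrate to system;
EOF
cat /tmp/create_migrate_dir.sql
# then: sqlplus / as sysdba @/tmp/create_migrate_dir.sql
```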


SQL> select tablespace_name from dba_tablespaces order by 1;

TABLESPACE_NAME
------------------------------
EXAMPLE
GGS_DATA
MoidTBS
SYSAUX
SYSTEM
TEMP
TESTTBS
TS_16K
UNDOTBS1
USERS

10 rows selected.


SQL> select file_id,file_name, tablespace_name from dba_data_files;


[oracle@PrimeDG oracle]$ rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Sat May 7 21:34:16 2016

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PRIMEDG (DBID=2797264308)



RMAN> backup incremental level 0 database plus archivelog;

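For an unattended run (for example under nohup while staging the migration), the same backup can be driven from an RMAN command file instead of an interactive session. A sketch; the /tmp paths are illustrative:

```shell
# Sketch: level-0 incremental backup driven from a command file so it
# can run non-interactively during the migration staging step.
cat > /tmp/level0.rman <<'EOF'
backup incremental level 0 database plus archivelog;
EOF
cat /tmp/level0.rman
# then: rman target / cmdfile=/tmp/level0.rman log=/tmp/level0.log
```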

Friday, May 6, 2016

Finding the location of a script

/u01/app/oracle/product/11.2.0/db_1/rdbms/admin
[oracle@PrimeDG admin]$ ls -lrt | tail
-rw-r--r-- 1 oracle oinstall     338 Aug 14  2009 libnfsodm11.def
-rw-r--r-- 1 oracle oinstall      64 Aug 14  2009 libxdb.def
-rw-r--r-- 1 oracle oinstall     776 Aug 14  2009 libskgxp11.def
-rw-r--r-- 1 oracle oinstall     244 Aug 14  2009 libskgxn2.def
-rw-r--r-- 1 oracle oinstall      76 Aug 14  2009 libqsmashr.def
-rw-r--r-- 1 oracle oinstall   59631 Aug 14  2009 orasdkbase_shrept.lst
-rw-r--r-- 1 oracle oinstall  131813 Aug 14  2009 liborasdkbase.def
-rw-r--r-- 1 oracle oinstall    4687 Aug 14  2009 libasmclnt11.def
-rw-r--r-- 1 oracle oinstall     102 Dec 31 15:30 initmeta.sql
-rw-r--r-- 1 oracle oinstall     105 Dec 31 15:30 dbmssml.sql


[oracle@PrimeDG ~]$ cd /
[oracle@PrimeDG /]$ pwd
/
[oracle@PrimeDG /]$ find . -name "dbmssml.sql" -print
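Searching from / works, but it is slow and noisy with permission errors. A sketch that scopes the search instead; for portability here it builds a throwaway tree standing in for $ORACLE_HOME (on a real host you would just point find at the actual home):

```shell
# Demo tree standing in for ORACLE_HOME (assumption: no database on this box)
demo=$(mktemp -d)
mkdir -p "$demo/rdbms/admin"
touch "$demo/rdbms/admin/dbmssml.sql"

# Scope the search instead of starting at /, and silence permission noise
find "$demo" -name "dbmssml.sql" -print 2>/dev/null
```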



[oracle@CyclopDB ~]$ cp -p $ORACLE_HOME/dbs/spfile${ORACLE_SID}.ora $ORACLE_HOME/dbs/spfile${ORACLE_SID}.ora_`date +"%Y%m%d"`


[oracle@CyclopDB ~]$ date +"%Y%m%d"
20160509
[oracle@CyclopDB ~]$ date
Mon May  9 00:11:25 EDT 2016
[oracle@CyclopDB ~]$ ls -lrt $ORACLE_HOME/dbs/spfile${ORACLE_SID}.ora_`date +"%Y%m%d"`
-rw-r----- 1 oracle oinstall 3584 May  8 23:00 /u01/app/oracle/product/11.2.0/db_1/dbs/spfileCyclopDB.ora_20160509
[oracle@CyclopDB ~]$ cd /u01/app/oracle/product/11.2.0/db_1/dbs/
[oracle@CyclopDB dbs]$ ls -altr
total 36
-rw-r--r--  1 oracle oinstall 2851 May 15  2009 init.ora
-rw-rw----  1 oracle oinstall 1544 Mar 28 16:36 hc_DBUA0.dat
-rw-rw----  1 oracle oinstall 1544 Mar 28 16:39 hc_CyclopDB.dat
-rw-r-----  1 oracle oinstall   24 Mar 28 16:41 lkCYCLOPDB
drwxrwxr-x 74 oracle oinstall 4096 Mar 28 16:51 ..
-rw-r-----  1 oracle oinstall 1536 Apr 17 13:19 orapwCyclopDB
-rw-r-----  1 oracle oinstall 3584 May  8 23:00 spfileCyclopDB.ora_20160509
-rw-r-----  1 oracle oinstall 3584 May  8 23:00 spfileCyclopDB.ora
drwxr-xr-x  2 oracle oinstall 4096 May  9 00:11 .

[oracle@CyclopDB dbs]$

Tuesday, May 3, 2016

MIGRATION

CHAPTER
2

Choosing the Right Migration Method

Chapter 1 introduced the various methods for performing data migrations. Before you can start planning your migration, you must decide on the basic method for the migration. By looking at your requirements and determining which method best meets your needs, as discussed in this chapter, you will be able to create a plan that is right for you.

EVALUATING YOUR ENVIRONMENT

There is no one migration path that is right for everybody. You should choose the migration path that best meets your particular requirements. We have performed many different data migrations using different methods. In each case, we determined which method to use based on an evaluation of the following factors:
- Migration goals
- Downtime requirements
- Equipment and infrastructure availability
- Budget
- Available resources
After assessing these factors, we are able to decide which methods are available and which one best meets the specific requirements. The methods and corresponding examples that we present in this book are intended to serve as guidelines that you can use to make your own choice after assessing the preceding factors in your own environment. You might choose one of these methods, a combination of these methods, or even a method that you devise for your own particular requirements.
Our goal in this book is not to tell you which method is right for you; instead, we intend to give you solid ideas and examples upon which you can start to build your own project.
TIP & TECHNIQUE
No two migrations are the same. Every customer is different, and every migration is different. When deciding how to migrate your data, don’t try to fit your plan into any of these methods exactly. Your specific needs will be different from everybody else’s. This book is designed to provide you with ideas. Take these ideas and run with them. Think outside the box and do what is right for your situation.

Migration Goals

The first and most important factor in determining the best migration strategy for you is identifying what your migration goals are. Why do you want to migrate? This might seem like a very simple question, but details are important. So, let’s look at several common reasons for performing a data migration. The following list is by no means comprehensive. Many other reasons exist for performing data migrations.
- Hardware refresh   Perhaps your hardware is out of date and you want to upgrade it to a newer, faster system. This is one of the most common reasons for performing a data migration. If you are migrating to the same hardware architecture, the available options and methods differ from those available when moving to a different hardware architecture, such as migrating from Oracle on AIX to Oracle on Linux or to Oracle Exadata.
- Support requirements   Many times migrations/upgrades are dictated by support requirements. Oracle support for very old versions of the database eventually ends. In addition, Oracle versions are tied to OS versions, which in turn lose support. Extended support can be purchased for certain time frames for a price, but eventually you must consider migrating/upgrading to a newer version of software to remain supported.
- Consolidation   As hardware becomes faster and more efficient, and supports more and more memory, you will often find that it is appropriate to consolidate several databases into one database. With Oracle Database 12c, you can now consolidate into a multitenant database using Oracle's pluggable databases.
- Security requirements   Recently we worked on a database migration/upgrade project where the motivating factor was to be able to encrypt the database using tablespace encryption with a Hardware Security Module (HSM). It is not uncommon for databases to be migrated and upgraded to support better security. In addition, security is updated on a quarterly basis via Oracle Critical Patch Updates (CPUs).
- Performance/new features   One of the primary reasons for migrating to a new database is to enable new database features. Each release of the Oracle database introduces a variety of new features that are designed to enhance performance and scalability.
These new features will vary by database version and your needs. Depending on the type of features that you want to take advantage of, the method of migration might be limited.
TIP & TECHNIQUE
Determining the migration goals is important to making the overall decision as to how you are going to do the migration.
It is important to first determine all of the reasons for migrating the database before you begin the migration plan. The reasons to migrate your database will very often determine the migration methods that are available to you. As with any task, preparation and planning is key to a successful migration.

Downtime Requirements

Determining allowable downtime is another extremely important part of choosing the appropriate migration process. This requirement will determine which migration methods are viable. If you are allowed plenty of downtime and have a small database, the number of options is unlimited. Unfortunately, in many cases the amount of downtime is very limited and the size of the database is very large.
The downtime requirements will be determined by the business unit and will usually be very strict, especially if formal service-level agreements (SLAs) are in place. In several of the recent migrations that we have worked on, the downtime requirements were limited to just a few hours, whereas the database size was over 10TB. This type of scenario can limit your options to a very few. Other requirements will further limit the options that you have for your migration method.
So, what is considered downtime and what will affect that downtime? For our purposes in this book, downtime is considered the length of time the database is unavailable to the application servers in order to get a consistent copy of the database. Keep in mind that the methods provided in this chapter have many variations.

Transformation Goals

The transformation goals are also very important to determining the available migration methods. So, what is a transformation goal? How is a transformation different from a migration? In the simplest of terms, if you are changing the structure of the database, you are performing a transformation in addition to a migration. In this book, we pretty much call everything data migration (except for a pure upgrade).
The simplest of migration goals is to move the database from one system to another system. This is very simple and straightforward. However, many times a data migration will offer additional benefits, such as the opportunity to take advantage of new features, enhance the database, and make overall database improvements.
Here are some examples of transforming a database during a migration:
- Upgrading from one version of Oracle to a newer version. This technically does not constitute a transformation, but rather an upgrade within the data migration.
- Moving from BasicFile LOBs to SecureFile LOBs. SecureFile LOBs are first available in Oracle Database 11g.
- Moving from range partitioning to interval partitioning. Interval partitioning was introduced in Oracle Database 11g.
- Encrypting the database.
- Compressing the database.
- Moving from a non-container database to a container database with pluggable databases (first available in Oracle 12c).
- Partitioning the database.
- Moving to Oracle Automatic Storage Management (ASM) from non-ASM storage.
During a data migration is possibly the best time to introduce these new features.
TIP & TECHNIQUE
When you are doing any type of migration, you should carefully consider if you can enable new features and/or restructure your database. Often, your best opportunity to modify the database is during a migration.
Operational Goals
Operational goals are simply additional functionality and/or features that are needed to meet business goals. For example, your IT organization may have mandated that a more robust disaster recovery plan be implemented before cutover to the new production environment as part of its ongoing business continuity plans; the migration would then be an excellent opportunity to implement Oracle Data Guard at the corresponding disaster recovery site.

Migration Methods Review

As mentioned in the previous chapter, there are several different methods for migrating a database. Each of these methods has its pros and cons, as outlined in Table 2-1. The key factor for most of these methods is how much time it takes to perform the data migration.
TABLE 2-1. Migration Options
Many migration methods mentioned in this book are not covered in detail, because many methods are subsets of one another. Therefore, you will not see a one-to-one relationship between this section and the rest of the book. In fact, many of the options listed in Table 2-1 are not entirely suited for an Oracle migration.
Depending on your requirements, not all of these methods may work for you. In this book we will be focusing on a few methods and variations of these methods. The key methods that we will be covering are described in the following sections, along with some of their variations.
Database Upgrade Assistant (DBUA)
DBUA can be used to upgrade a database to the next major release. Using this tool can be time-consuming, and it modifies your production database. You must back up your system (as with all migration methods) before the upgrade, and the only way to back out the upgrade is to restore the original version of the database from its prior backup. DBUA performs an in-place upgrade, and it only performs upgrades, not migrations.
NOTE
Using the DBUA is the only method that we cover in this book that does not perform a migration, only an upgrade. This upgrade is done in-place.
Export/Import
The traditional Export/Import utility can still be an important part of any data migration. Even if this migration method is not used as the primary method, in many cases it is used to export and import metadata such as user schemas and so forth.
This method can be used in conjunction with GoldenGate: the schema objects are created and loaded via Export/Import or Data Pump, and GoldenGate replication then keeps them synchronized until cutover.
NOTE
Traditional Export/Import is desupported starting with Oracle Database 11g.
Data Pump
Data Pump is similar to Export/Import but is much faster. It can be used to create objects and to efficiently move data. Data Pump is very fast at moving traditional data, but transfer speeds for BasicFile LOB data tend to be dramatically slower than when transferring SecureFile LOB data.
Depending on the type of data being transferred, network throughput, and so on, it is not unusual to realize transfer speeds of over 100GB/hour using Data Pump.
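As a concrete sketch, a full-export Data Pump parameter file targeting the MIGRATE directory object created in the May 8 post above; the PARALLEL and FILESIZE values are illustrative, not tuned recommendations:

```shell
# Sketch: full-database expdp parfile using the MIGRATE directory object.
# PARALLEL/FILESIZE are illustrative values, not recommendations.
cat > /tmp/expdp_full.par <<'EOF'
directory=MIGRATE
dumpfile=full_%U.dmp
logfile=expdp_full.log
full=y
parallel=4
filesize=10G
EOF
cat /tmp/expdp_full.par
# then: expdp system parfile=/tmp/expdp_full.par
```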
RMAN
Oracle Recovery Manager (RMAN) is generally recognized as the de facto utility for quickly backing up and restoring a database and its data, but even though it is not specifically designed for data migration, it can be used for that purpose.
RMAN can perform both full and incremental backups. Using an incremental backup, you can reduce the length of time that the database is down during cutover to the length of time it takes to complete the incremental backup and restore.
While this method does not allow for database changes during migration, it definitely supports some basic storage changes such as tablespace relocation and endian/character set conversion.
Transportable Tablespaces
Using transportable tablespaces can be an efficient way of performing a database migration. The process is fairly fast, taking just a little longer than the time it takes to copy the tablespace’s datafiles between source and target environments.
However, this method does not allow for database changes during the migration.
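The transportable tablespace flow can be outlined as follows; TESTTBS is taken from the tablespace list earlier in this post and the file paths are illustrative. The target-side steps are shown as comments:

```shell
# Sketch: transportable tablespace export for one tablespace (TESTTBS).
cat > /tmp/tts_export.sql <<'EOF'
-- source side: make the tablespace read only before exporting metadata
alter tablespace TESTTBS read only;
EOF
cat > /tmp/tts_expdp.par <<'EOF'
directory=MIGRATE
dumpfile=tts_testtbs.dmp
logfile=tts_testtbs.log
transport_tablespaces=TESTTBS
EOF
cat /tmp/tts_expdp.par
# then: sqlplus / as sysdba @/tmp/tts_export.sql
#       expdp system parfile=/tmp/tts_expdp.par
#       copy the datafiles to the target, then impdp with
#       transport_datafiles=<copied datafile paths>
```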
Cross-Platform Tablespace Migration with Incremental Backups
This is one of the primary methods used (and recommended) for migrating to Exadata systems. It essentially combines transportable tablespace methods with incremental RMAN backup methods to perform the data synchronization.
While this method does require some downtime, that downtime will be limited to the time it takes to perform the final incremental backups.
Data Guard
In certain specific cases, Oracle Data Guard can be used to migrate a source database to a different system with a similar architecture. Simply set up a physical standby database on the new system, and when you are ready to cut over, perform a Data Guard switchover operation. An additional benefit of this method is that you are left with a ready-made physical standby.
This method does not allow for database changes during the migration, and it does require the same hardware architecture on the source and target.
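The cutover step itself can be sketched as follows; this assumes a physical standby is already built and synchronized, and shows only the SQL*Plus form (Data Guard Broker users would issue "switchover to" from DGMGRL instead):

```shell
# Sketch: Data Guard switchover cutover (SQL*Plus form, 11g syntax).
cat > /tmp/dg_switchover.sql <<'EOF'
-- on the primary
alter database commit to switchover to physical standby with session shutdown;
-- on the (old) standby, after the primary completes:
-- alter database commit to switchover to primary;
EOF
cat /tmp/dg_switchover.sql
```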
GoldenGate
Oracle GoldenGate is an excellent product for keeping systems in sync for a quick switchover. Since GoldenGate is a replication technology, it allows only specific tables to be replicated instead of the entire database, and that can reduce the time required to complete the cutover significantly.
GoldenGate can be used to perform migrations that include data structure changes and transformations. When used in conjunction with Oracle Flashback Database features, the target database’s performance can be tested prior to the cutover. Using GoldenGate for data migration offers many great features which we will describe later in the book.

Resources

When you consider which human resources are necessary for a data migration, you need to take into account several different factors, including the number of people needed, their skill levels, their experience, and their availability. These factors are all extremely important to performing the data migration successfully. If you don’t have enough of the right resources, the project is doomed to be delayed or suffer serious setbacks.
So, what kind of resources are required? Since many different tasks need to be performed for a smooth migration, you will need a wide range of resources, including the following:
- Project management   Without proper project management resources, it will be very difficult to keep track of and maintain the schedule. A professional PM is absolutely needed for all but the simplest and smallest data migrations. In addition, the PM can assist with the mediation of issues that can occur before, during, and after the migration.
- Team leadership   There must be a clear chain of command to be able to guide the project. Depending on the size of the migration, there will most likely be several teams of engineers, but they all should report to one project lead.
- Oracle resources   These resources must be familiar with Oracle DBA tasks and must be able to modify and/or code stored procedures and SQL scripts. They must be familiar with tools such as Export/Import and Data Pump as well as other data transport methods. They must be very familiar with the database configuration and usage as well as with the data itself.
- GoldenGate resources   If you are using GoldenGate for your migration method, you must have one or more skilled GoldenGate resources. Having someone who has had experience with this type of migration is important.
- Network resources   Any migration involves not only moving databases from one system to another, but moving application servers as well. In almost any migration, this involves opening firewalls, copying data between systems, and allowing user access, and this typically means there will be a need to modify the network configuration.
- System administrators   Every migration involves both database resources and OS resources. This means it is necessary to either have your DBAs perform double-duty as system administrators (if they have the skills) or have dedicated system administrators.
- Storage administrators   Since every database requires storage, storage administrators must be included as part of the migration project team.
- Quality assurance and testing resources   Before putting any new system into production, it is crucial to test it to make sure that all of the applications work properly. This involves QA analysts and testing resources. If Oracle testing tools such as Real Application Testing and Database Replay are going to be used, it's important that the QA and testing resources are familiar with how those tools will be used.
As you can see, for moderate to large migration projects, many people will be involved.
TIP & TECHNIQUE
Don’t be hesitant to bring in extra help if you need it. Getting the help and expertise that you need to be successful is far better than trying to get by with available resources and failing.
In-House Resources
If the resources listed previously are available in house, that's always a good choice. To use in-house resources, you must make sure that they are available full-time for the migration project. If the only available resources cannot be dedicated to the migration project, delays and/or project failures often follow. Dedicated resources are simply necessary for project success.
Additional Resources
If you are going to bring in outside resources to help with the data migration, you will need to make sure that the resources are experienced with this type of migration and have the right type of skills that are needed for the project. Additional resources are a great way to help with the database migration project because you can often find people who have already experienced the type of migration that you are performing.
There are many skilled resources or companies that you can bring on board to help with your data migration. Depending on the method that you have chosen, there are organizations that are skilled and experienced in every type of data migration. Engaging outside resources that can bring specific skills and experience to the table will increase your chance of success.

Budget

Probably the biggest deciding factor of which migration method you choose is the project’s budget. The amount of time it takes to perform the migration depends on the number of staff dedicated to the project as well as any additional software and hardware resources. These all relate back to budget.
No data migration is free; there is always a cost, sometimes significant. If additional software is required, such as Oracle GoldenGate, licenses must be acquired. If additional hardware is required, both hardware and, potentially, additional licenses must be purchased.
Unfortunately, we have often seen a migration project’s budget be severely underestimated, and as a result the migration ends up exhausting the planned budget. It is therefore crucial that the project’s budget does take into account the project’s migration goals, human resources, licensing, and hardware costs.
Additional Software
As mentioned previously, almost every migration involves the purchase (or exchange) of software licenses. When you are moving a database from one system to another system, the database licenses will most likely change, especially if the number of CPUs or cores changes. If you are migrating from one version of software to a newer one, the licensing cost will most likely change.
Database licenses are a complex issue and we will not attempt to explain them here; we simply stress that software licenses are an important component of the migration budget and their costs must be researched. Consult your software vendor for licensing costs early in the budgeting process.
Equipment Resources
Almost every data migration is done for the purpose of moving to new hardware. The new hardware may or may not match the same architecture and OS of the original system. If it does match, the migration will be much simpler, of course, but in many cases one of the primary goals of the migration is to move from one architecture to another, for any number of reasons.
Depending on whether or not the new systems are of the same hardware architecture and OS, the number of migration options will also vary. The number and type of CPU cores will dictate licensing costs; in addition, the new OS/hardware environment will most likely require additional training and staffing needs. For example, if you are migrating from Microsoft SQL Server to Oracle, you might not have the in-house resources to manage the new system without hiring additional help.
Infrastructure
Infrastructure is an important part of every migration. If you are adding new systems, you must provide sufficient network and storage connectivity to those new servers. Necessary infrastructure includes
- Rack space
- Power
- Cooling
- Network cabling
- Storage cabling
- Network switch space
- Storage switch space
- Other components, such as routing, firewalls, and so forth
Without proper infrastructure planning, your migration efforts are likely to run into unexpected problems during a crucial project phase. Remember that when migrating data, the speed of the network between the source of the migration and the target of the migration is important.
Proper infrastructure planning usually involves including the systems administration, network administration, and storage administration teams early in the planning phase alongside the database and application server groups. Our experience has shown that combining forces as early as possible during the planning stage helps the project run more smoothly and efficiently.

Additional Considerations

In addition to the requirements covered previously, there are a number of other requirements that will further limit which migration methods are available in a given scenario. These include data types, database features, and so forth. If the structure of the database or the data is going to change, some options such as Data Guard cannot be used because they require an identical database on the target system.
Data Types
Typically a migration from Oracle to Oracle does not involve data type changes, unless of course changing data types is part of the reason for the migration. For example, when data type conversion from BasicFile LOBs to SecureFile LOBs is a project requirement, there can be a significant impact on project timelines, especially for testing the results of the conversion.
When moving from SQL Server to Oracle or from other databases to Oracle, it is often necessary to change data types. This is important because it will limit the choices of data migration methods. The fewer changes that will be made to the target database, the more options there are for data migration.
Database Features
Often data migrations are performed in order to be able to take advantage of new database features. In the past few years Oracle has added many new features that make a database upgrade worthwhile, especially when one or more new features enhance both application functionality and performance. These features might include the following:
- Partitioning   Oracle is constantly working to improve partitioning. In recent releases Oracle has introduced more compound partitioning types as well as new types such as interval partitioning.
- LOBs   SecureFile LOBs are much more efficient and extremely fast as compared to legacy LOBs. We have migrated databases and converted to SecureFile LOBs on several occasions.
- Security   When migrating to a new platform, it is often convenient to enhance the security model of the database. For example, during a recent upgrade/migration, we implemented encrypted tablespaces.
- Storage   We have found that a great time to move from non-ASM to ASM storage is during a migration, since this provides all of the additional benefits of ASM storage technology.
As discussed earlier, there are a lot of other reasons to migrate data, the most common of them being a hardware refresh.
Infrastructure Capabilities
During the data migration itself, it’s important not to neglect existing infrastructure capabilities. If you do not have a sufficiently fast network as well as storage of sufficient capacity and throughput, the data migration will suffer in terms of schedule and efficiency. The infrastructure will be the conduit through which the data moves from the source to the target system. The speed will vary depending on the speed of the network and the number of components that the data must travel through.

MAKING YOUR DECISION

So, how do you decide on the type of data migration that you want to perform? There are a few major factors and a number of minor factors discussed in this chapter that will absolutely affect the type of database migration that you will be performing. The biggest of these factors is the downtime requirement. Because downtime requirements vary by organization, it is important to lock down your specific needs. Let's take a look at how some of the major requirements can dictate your migration method. As always, everybody's requirements are different. Now that you are familiar with the major factors that impact your choice of database migration, it's time to make a choice by matching your specific requirements with a particular method.

How Downtime Affects Your Choice of Method

The downtime factor is very important. The shorter the downtime that you are allowed, the fewer migration options that are available to you. The migration methods in Table 2-2 are listed in the approximate order of slowest to fastest. This list takes into consideration the amount of time that the applications will be unavailable during the cutover. Note that these methods do not stand alone—as you have seen earlier in this chapter, there are many variations on these methods.
TABLE 2-2. Options Listed by Downtime
So, if your applications are restricted by a very short time frame for cutover, your options are limited to Data Guard or GoldenGate.
NOTE
GoldenGate requires an initial migration, which might be accomplished using any of the other methods.

How Versions Affect Your Choice of Method

The target Oracle version is another important factor in choosing a migration method. As Table 2-3 indicates, if the target of your migration is a different Oracle version, you can rule out using RMAN or Data Guard for the initial migration.
TABLE 2-3. Migration Options Versus Database Version Consideration
So, if you wish to modify the target database, you are limited to Export/Import, Data Pump, or GoldenGate. You can start to see how your decision might be made for you.

How Architecture Affects Your Choice of Method

There might be other factors involved in your decision-making process besides the downtime requirement and versioning requirement. One of these might be the architecture of the system. If you are moving to a different architecture, your options might be limited by that choice (see Table 2-4).
TABLE 2-4. Impact of Architecture Transition Upon Migration Methods

How Database Transformation Affects Your Choice of Method

A database transformation occurs when you change database objects during your migration. This is a very common occurrence, and the perfect time to transform the database is when performing a migration. The following kinds of transformations are possible:
-   Partitioning: Adding partitions, converting to a different partitioning scheme, and so forth. This is a very common transformation, and the migration itself is the best time to add partitioning.
-   Security: Adding encryption or a Hardware Security Module (HSM). Since adding encryption, such as encrypted tablespaces, usually requires rebuilding the tablespace structure, a migration is the ideal time for this type of operation.
-   LOB transformation: Converting to SecureFile LOBs, the LOB storage format introduced in Oracle Database 11g. The performance of SecureFile LOBs far exceeds that of traditional (BasicFile) LOBs.
-   Compression: Compressing tables or tablespaces. This can be done either during a migration or as a stand-alone task; however, compressing during a migration is much more efficient.
-   Other: There are a number of other features that you might enable as part of a migration; some require data transformation, while others do not.
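As a sketch of the DDL these transformations involve, the following hypothetical Oracle SQL rebuilds a table on the target during a migration, adding interval partitioning, a SecureFile LOB, and table compression in one pass. All object names, the database link, and the compression clause (12c syntax; 11g uses `COMPRESS FOR OLTP`) are illustrative assumptions, not from a real system:

```sql
-- Hypothetical target-side DDL: rebuild a table during migration,
-- adding partitioning, a SecureFile LOB, and compression at once.
CREATE TABLE app.orders_new (
    order_id    NUMBER       NOT NULL,
    order_date  DATE         NOT NULL,
    order_doc   CLOB
)
LOB (order_doc) STORE AS SECUREFILE   -- SecureFile LOB (11g and later)
ROW STORE COMPRESS ADVANCED           -- table compression (12c syntax)
PARTITION BY RANGE (order_date)
  INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
  (PARTITION p_initial VALUES LESS THAN (DATE '2016-01-01'));

-- Load during the migration window, e.g., over a database link
-- to the source (source_db is a hypothetical link name):
INSERT /*+ APPEND */ INTO app.orders_new
SELECT order_id, order_date, order_doc
FROM   app.orders@source_db;
```

Because the table is created empty and loaded as part of the migration, the partitioning, LOB, and compression changes cost no additional downtime beyond the migration itself.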
Whether each method allows for transformation is shown in Table 2-5.
TABLE 2-5. Options Listed by Whether Transformation Allowed
TIP & TECHNIQUE
There are various techniques for transforming the database during a migration. They are discussed throughout this book.

Decision Summary

As you have seen, there are a number of factors that will dictate which methods are available for you to choose. Listing the factors in a table provides an easy way to see what migration methods might be eliminated, as shown in Table 2-6.
TABLE 2-6. Migration Summary
Now you can see how your decision is limited based on your specific criteria for migration. Depending on your requirements, the choice of migration method becomes more obvious. Of course, your specific requirements might allow for different combinations of these methods.

HYBRID SOLUTIONS

The solutions listed previously are not mutually exclusive. In fact, several of them can be used together. This is especially true of Oracle GoldenGate, a replication tool. GoldenGate includes a facility to perform the initial data migration, but that facility is not especially efficient. When we perform a GoldenGate migration, we combine it with other methods for the initial setup and migration. Let's look at an example.

GoldenGate Migration Example

The basic idea of using GoldenGate for a data migration is to start replicating data, perform the initial migration, and then sync up the changes that have occurred since the point in time that the initial migration occurred. Let’s look at these steps in a little more detail:
1.   Start replication. GoldenGate Extract processes mine transactions from the redo logs. Once the Extract processes are running, all subsequent changes are captured.
2.   Initial migration. There are a number of options for performing the initial migration: Export/Import, Data Pump, hardware cloning, transportable tablespaces, and so forth. In the example migration we performed:
Tablespaces were created with encryption, which was set up using a hardware security module (HSM).
Once the tablespaces were created, Export/Import was used to create the schema objects.
The schema was then modified to include transformations such as SecureFile LOBs, partitioning, and so forth.
The data was then migrated using an activated physical standby via Flashback Database.
3.   Sync up. The data was synced up by enabling the GoldenGate REPLICAT processes. The REPLICATs were started using the SCN recorded when the guaranteed restore point was created, which marked the last transaction captured from the source database by the initial migration.
Once the data was synced up, both the source and target had the same data, even though they were different versions of Oracle existing on different hardware architectures and for which several database objects had been transformed.
At this point, we were then able to test the applications, perform final QA verification, and then move users to the new system.
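The GoldenGate side of the steps above can be sketched with GGSCI commands. The process names, trail file prefix, and CSN value are illustrative assumptions, and a real configuration would also need parameter files for each process:

```
-- Step 1: capture changes on the source (run in GGSCI on the source host)
GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/aa, EXTRACT ext1
GGSCI> START EXTRACT ext1

-- Step 3: after the initial migration, apply queued changes on the target,
-- starting at the SCN recorded when the guaranteed restore point was created
GGSCI> ADD REPLICAT rep1, EXTTRAIL ./dirdat/aa
GGSCI> START REPLICAT rep1, AFTERCSN 8837254
```

The `AFTERCSN` clause is what ties the replication stream back to the initial migration: Replicat skips any transactions already contained in the migrated data and applies only those committed after that SCN.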
As you can see, this is a very elegant solution combining three migration methods: Export/Import, an activated physical standby, and GoldenGate. More details are provided later in this book.

SUMMARY

As you have seen in this chapter, there is no one migration path that is right for everybody. You must customize your own migration path based on your needs. There are many different methods of database upgrade, migration, and transformation, and each is chosen based on your needs, which may include the following:
-   Source and target versions
-   Transformation requirements
-   Schedule
-   Equipment and infrastructure availability
-   Budget
-   Available resources
All of these factors help determine the method that fits your specific project. No two projects are exactly alike. The different methodologies and technologies can be combined to create a plan that best meets your needs. The methods we have described are guidelines, illustrated with examples; you might choose one of them, combine several, or even develop your own unique method.
Based on your requirements, your choices might be limited; even so, there are usually combinations that will work. Hybrid solutions that combine multiple methods are often the best way to go.