My current customer has quite a few 12.1 databases on Exadata in their (BI) landscape.

With multitenant, the paradigm changed from several schemas/applications in one database to each schema in its own PDB.

With that in mind, I was asked to split several multi-terabyte databases into separate PDBs.

We went the fully transportable tablespace route, as described in My Oracle Support note:

How to Create Transportable Tablespaces Where the Source and Destination are ASM-Based (Doc ID 394798.1)


Note:

Tablespaces must be read only to check whether they are self-contained in versions prior to 19c; from 19c onwards this is no longer the case. Mike Dietrich has a post about this here: https://mikedietrichde.com/2020/10/02/test-transportable-tablespaces-without-read-only-mode/
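
For reference, a minimal self-containment check could look like the sketch below; the tablespace names are the ones that appear in the export log later in this post:

-- Check whether the set of tablespaces is self-contained; TRUE also
-- includes referential integrity constraints in the check
EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK('TBS1_ETBS,TBS2', TRUE);

-- Any rows returned here are containment violations that must be fixed first
SELECT * FROM SYS.TRANSPORT_SET_VIOLATIONS;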

The above procedure works flawlessly, but it doesn't scale for bigger databases:

  • the tablespaces need to stay read only while the copy is made
  • the time to transfer the datafiles over the network just takes too long

So what I thought was the following: what if I could use Data Guard to help me out? In the end, what Data Pump is interested in is datafiles and an SCN, so Data Guard to the rescue.


High level steps (a SQL sketch of the key commands follows the list):

  1. On the database host where my new database will reside, create a standby of the source.
  2. Create the destination 19c PDB.
  3. Put the tablespaces you want to transport in read only and check that they are self-contained.
  4. Export the metadata with all the necessary excludes.
  5. Disable the apply on the standby.
  6. Put the tablespaces back in read write on the primary.
  7. Shut down the standby and disable its startup.
  8. Run impdp for the metadata into the new PDB, using the file locations of the standby to save the copy time, so it is crucial that the standby stays shut down.
  9. Check for invalid objects and fix them.
  10. Encrypt the imported tablespaces or move the datafiles (this will put them in the correct file structure in the PDB).
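
Here is a hedged SQL sketch of steps 2 to 7; the tablespace names come from the export log below, the PDB name from the import log, and the admin user is purely illustrative:

-- Step 2, in the 19c CDB: create and open the destination PDB
CREATE PLUGGABLE DATABASE mynewpdb ADMIN USER pdbadmin IDENTIFIED BY "********";
ALTER PLUGGABLE DATABASE mynewpdb OPEN;

-- Step 3, on the primary: make the tablespaces to transport read only
ALTER TABLESPACE tbs1_etbs READ ONLY;
ALTER TABLESPACE tbs2 READ ONLY;

-- Step 5, on the standby (after the metadata export): stop redo apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- Step 6, on the primary: put the tablespaces back in read write
ALTER TABLESPACE tbs1_etbs READ WRITE;
ALTER TABLESPACE tbs2 READ WRITE;

-- Step 7, on the standby: shut down and make sure nothing restarts it
SHUTDOWN IMMEDIATE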

Let's dig deeper. I will not go over all the steps because they can be environment specific;
for example, I had a tablespace on my source which was already TDE encrypted, so I needed to export the key from the old 12.1 database and import it into the target PDB (it is important to do this in the PDB itself, otherwise you will have issues).
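
A hedged sketch of that key transport, assuming a file-based keystore; the secret and file names are illustrative:

-- On the 12.1 source: export the master key(s) to a file
ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS
  WITH SECRET "transport_secret"
  TO '/tmp/source_keys.exp'
  IDENTIFIED BY "<source_keystore_password>";

-- On the 19c target, inside the PDB (not in CDB$ROOT):
ALTER SESSION SET CONTAINER = mynewpdb;
ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS
  WITH SECRET "transport_secret"
  FROM '/tmp/source_keys.exp'
  IDENTIFIED BY "<target_keystore_password>" WITH BACKUP;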

But let's go over the export and import phases.

Export phase

Here is an obfuscated parfile:

directory=tts_dump
dumpfile=exp_tts.dmp
logfile=tts_exp.log
transportable=always
full=y
metrics=y
version=12
exclude=table_statistics,index_statistics
exclude=tablespace:"IN ('USERS','UNDOTBS4','UNDOTBS3')"
exclude=SCHEMA:"IN ('APPB','APPC','ORACLE_OCM','GSMADMIN_INTERNAL','DBSNMP','OJVMSYS','XDB','WMSYS')"
exclude=USER:"IN ('PFIERENS')"
encryption_password="<PASSWORD_TO_ENCRYPT_THE_DMP_FILE>"
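
With that parfile in place the export is a regular expdp call; the connect string here is illustrative:

expdp system@primary parfile=exp_tts.par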



The export in this case took almost 4 minutes:


Datafiles required for transportable tablespace TBS1_ETBS:
  +DATAC99/PRIMARY/DATAFILE/TBS1_ETBS.375.1072343983
Datafiles required for transportable tablespace TBS2:
  +DATAC99/PRIMARY/DATAFILE/TBS2.413.1072343973
Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at Wed Jun 9 09:27:09 2021 elapsed 0 00:03:43




The import parfile looks like this:

DIRECTORY=tts_dump
DUMPFILE=exp_tts.dmp
LOGFILE=tts_dump_log:ttsimport_tts.log
metrics=yes
TRANSPORT_DATAFILES='+DATAC99/STANDBY/DATAFILE/TBS1_ETBS.914.1074269403','+DATAC99/STANDBY/DATAFILE/TBS2.280.1074269351'
encryption_password="<PASSWORD_TO_ENCRYPT_THE_DMP_FILE>"

Note that the datafiles refer to the standby's copies! So make sure you don't start the standby again.
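
To collect those standby file names you can query the standby before the final shutdown; a minimal sketch, using the tablespace names from this post:

-- On the standby, before shutting it down for good: list the ASM file names
-- of the datafiles belonging to the tablespaces being transported
SELECT df.name
FROM   v$datafile df
       JOIN v$tablespace ts ON ts.ts# = df.ts#
WHERE  ts.name IN ('TBS1_ETBS', 'TBS2');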

The import itself looks like this:

Import: Release 19.0.0.0.0 - Production on Wed Jun 9 09:30:17 2021
Version 19.9.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
;;;
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
;;; **************************************************************************
;;; Parfile values:
;;;  parfile:  encryption_password=********
;;;  parfile:  transport_datafiles=+DATAC99/STANDBY/DATAFILE/TBS1_ETBS.914.1074269403,
;;;  _parfile: +DATAC99/STANDBY/DATAFILE/TBS2.280.1074269351,
;;;  parfile:  metrics=Y
;;;  parfile:  logfile=tts_dump_log:ttsimport_tts.log
;;;  parfile:  dumpfile=exp_tts.dmp
;;;  parfile:  directory=tts_dump
;;;  parfile:  userid=system/********@mynewpdb.acme.com
;;; **************************************************************************
W-1 Startup took 1 seconds
W-1 Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/********@mynewpdb.acme.com parfile=imp_tts.par
W-1 Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
W-1      Completed 1 MARKER objects in 1 seconds
W-1 Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
W-1      Completed 1 MARKER objects in 2 seconds
W-1 Processing object type DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK
W-1      Completed 1 PLUGTS_BLK objects in 2 seconds
W-1 Processing object type DATABASE_EXPORT/TABLESPACE
ORA-31684: Object type TABLESPACE:"TEMP_ETL" already exists
W-1      Completed 1 TABLESPACE objects in 0 seconds
W-1 Processing object type DATABASE_EXPORT/PASSWORD_VERIFY_FUNCTION

....

W-1      Completed 1358 AUDIT objects in 4 seconds
W-1 Processing object type DATABASE_EXPORT/POST_SYSTEM_IMPCALLOUT/MARKER
W-1      Completed 1 MARKER objects in 2 seconds
W-1      Completed 1 DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 0 seconds
W-1      Completed 41 DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA objects in 2 seconds
W-1      Completed 10 DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 22 seconds
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" completed with 1598 error(s) at Wed Jun 9 09:35:57 2021 elapsed 0 00:05:36


So that took about 5 minutes. There are 1598 errors, but they are due to the excluded users; everything was nicely imported.
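
For step 9, checking and fixing invalid objects in the new PDB can be as simple as the sketch below; UTL_RECOMP is the standard recompile utility, and the parallel degree is illustrative:

-- List invalid objects in the new PDB
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID';

-- Recompile them, here with 4 parallel threads
EXEC UTL_RECOMP.RECOMP_PARALLEL(4);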

In total, for 1 TB, it took me 8 minutes 43 seconds of downtime plus the TTS check. The time is not really related to the size, but more to the number of objects, partitions, etc. to import.

The datafiles are still located under the directory structure of the standby 12.1 database, which of course is not the goal.

In my case I needed to encrypt the tablespaces, which I did in an online fashion; alternatively, you can do ALTER DATABASE MOVE DATAFILE 'XXXX' TO '+DATAC99'.
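
A hedged sketch of both options; the file name is taken from the import log above, and the encryption algorithm is illustrative:

-- Option 1: encrypt the tablespace online; with OMF on ASM this also
-- rewrites the datafiles into the PDB's own directory structure
ALTER TABLESPACE tbs2 ENCRYPTION ONLINE USING 'AES256' ENCRYPT;

-- Option 2: move a datafile into the disk group; OMF places it under the
-- new database's structure
ALTER DATABASE MOVE DATAFILE '+DATAC99/STANDBY/DATAFILE/TBS2.280.1074269351'
  TO '+DATAC99';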


I could use this technique because the application in the database knows which files to fetch when it is shut down. If that is not the case, then the fully transportable tablespaces with incremental backup method (xTTS) is probably better suited for you.

Check out Mike Dietrich's blog for that:

https://mikedietrichde.com/2019/01/28/different-mos-notes-for-xtts-perl-scripts-use-v4-scripts/

I would like to thank Pieter Van Puymbroeck and Mike Dietrich for their time and support when I pitched the idea.
