Server Result Cache
* Enables query results to be cached in memory and reused by later executions of the same query, bypassing regular processing and returning results faster.
* Reduces wait time for both physical and logical I/O by fetching results directly from cache memory.
* A cached result set is fully shareable between sessions and across statements, as long as they share a common execution plan.
* The server result cache is a new component of the SGA that caches query results and is managed by Automatic Memory Management.
* The new RESULT_CACHE_MAX_SIZE parameter enables the result cache by setting the maximum size of the cache.
* A new optimizer hint allows use of the result cache at the query level.
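For example, a query can request caching explicitly with the hint. A minimal sketch against the sample HR schema (the table and columns are illustrative):
SELECT /*+ RESULT_CACHE */ department_id, AVG(salary)
FROM hr.employees
GROUP BY department_id;
The first execution populates the cache; identical later executions return the stored result without redoing the work.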
[Figure: Query execution without result cache hint – Oracle 11g]
[Figure: Query execution with result cache hint – Oracle 11g]
Parameters related to Result Cache
RESULT_CACHE_MAX_RESULT: specifies the percentage of RESULT_CACHE_MAX_SIZE that any single result can use.
RESULT_CACHE_MAX_SIZE: specifies the maximum amount of SGA memory (in bytes) that can be used by the result cache.
RESULT_CACHE_REMOTE_EXPIRATION: specifies the number of minutes that a result using a remote object is allowed to remain valid.
RESULT_CACHE_MODE: specifies when a ResultCache operator is spliced into a query's execution plan.
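As an illustration, the cache could be sized and switched to force mode as follows (the values are arbitrary examples, not recommendations):
ALTER SYSTEM SET RESULT_CACHE_MAX_SIZE = 64M SCOPE = BOTH;
ALTER SYSTEM SET RESULT_CACHE_MODE = FORCE;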
How to find result cache information:
V$RESULT_CACHE_STATISTICS
V$RESULT_CACHE_MEMORY
V$RESULT_CACHE_OBJECTS
V$RESULT_CACHE_DEPENDENCY
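For example, a quick look at cache activity:
SELECT name, value FROM V$RESULT_CACHE_STATISTICS;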
DBMS_RESULT_CACHE – PL/SQL API for result cache management:
Functions: Status – displays the current status of the result cache.
SELECT DBMS_RESULT_CACHE.status FROM dual;
STATUS
----------------
ENABLED
Flush: removes all objects from the result cache and releases memory.
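For example, to clear the cache from SQL*Plus:
EXEC DBMS_RESULT_CACHE.flush;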
I will cover some more features in the next post. Keep reading!
Tuesday, August 10, 2010
Oracle WebLogic Server Basic Concepts
Check out this SlideShare Presentation:
[Embedded SlideShare presentation from jambay: Oracle WebLogic Server Basic Concepts]
Sunday, August 1, 2010
Oracle GoldenGate - Positioning a Read of Trail Files or Oracle Redo Log
Positioning in Extract / Replicat Trail and Log
If you ever need to position an Extract in the transaction log (redo log), an Extract trail file, or a Replicat trail file, the following options can be used:
INFO EXTRACT <group>, DETAIL
Names your current redo log, along with its sequence number and RBA, and your Extract trail, along with its RBA. The RBA is the relative byte address of the record in the trail file at which the checkpoint was made.
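For example (the group name is hypothetical):
INFO EXTRACT finance, DETAIL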
INFO EXTRACT <group>, SHOWCH
Shows your read checkpoint in the data source and your write checkpoint in the trail files.
Log Read Checkpoint File /orarac/oradata/racq/redo01.log   <-- Oracle redo info
2006-06-09 14:16:45 Thread 1, Seqno 47, RBA 68748800
Log Read Checkpoint File /orarac/oradata/racq/redo04.log
2006-06-09 14:16:19 Thread 2, Seqno 24, RBA 65657408
Current Checkpoint Detail:
Read Checkpoint #1
Oracle RAC Redo Log
Startup Checkpoint (starting position in data source):
Sequence #: 47
RBA: 68548112   <-- RBA offset of entry in redo log
Timestamp: 2006-06-09 13:37:51.000000
SCN: 0.8439720
Redo File: /orarac/oradata/racq/redo01.log
Recovery Checkpoint (position of oldest unprocessed transaction in
data source):
Sequence #: 47
RBA: 68748304
Timestamp: 2006-06-09 14:16:45.000000
SCN: 0.8440969
Redo File: /orarac/oradata/racq/redo01.log
Current Checkpoint (position of last record read in the data source):
Write Checkpoint #1
GGS Log Trail   <-- start of GoldenGate trail information
Current Checkpoint (current write position):
Sequence #: 2
RBA: 2142224   <-- RBA offset of entry in trail file
Timestamp: 2006-06-09 14:16:50.567638
Extract Trail: ./dirdat/eh
Header:
Version = 2
Record Source = A
Type = 6
# Input Checkpoints = 2
# Output Checkpoints = 1
Once you have your checkpoint RBAs, you can use a few GGSCI commands to help you debug:
ADD EXTRACT
Allows creation of an Extract group from a specific position in a trail file or transaction log.
Some useful options are:
EXTTRAILSOURCE <trail>
Specifies a trail as the data source. For <trail>, specify the fully qualified path name of the trail, for example c:\ggs\dirdat\aa.
BEGIN <value>
Specifies a timestamp in the data source at which to begin processing.
<value> is either:
□ NOW - the time at which the ADD EXTRACT command
is issued.
□ A date and time in the format of:
yyyy-mm-dd [hh:mi:[ss[.cccccc]]]
*** If you have a 4-node RAC cluster environment, add the "THREADS 4" option to any command to which it applies.
EXTRBA
Specifies an RBA at which to start extracting. This can be used to
skip over a bad entry in a trail file.
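Putting these options together, a hypothetical command to create an Extract that reads a trail from a given starting point might look like:
ADD EXTRACT finance, EXTTRAILSOURCE c:\ggs\dirdat\aa, BEGIN NOW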
ALTER EXTRACT
Allows changing the attributes of an extract file created by the ADD EXTRACT
command and allows the incrementing of an extract to the next file in the sequence.
***Always “STOP EXTRACT” before using this command.
You can use this command to make any changes using any of the options to the ADD EXTRACT command (above). So, for example, you can ALTER EXTRACT an extract file to begin at a specific RBA for skipping over an entry in the extract file.
Ex.: ALTER EXTRACT finance, EXTSEQNO 26, EXTRBA 338
You can change any of the attributes specified with the ADD EXTRACT command, except for the following:
□ Altering an Extract specified with the EXTTRAILSOURCE
option.
□ Altering the number of RAC threads specified with the THREADS
option.
SEND EXTRACT
Allows you to send commands to a running Extract process.
Some useful options are:
GETLAG shows the lag time between the Extract and the data source.
LOGEND shows whether or not the Extract has processed all records in the data source.
ROLLOVER makes the Extract increment to the next file in the trail upon startup.
SHOWTRANS shows information about current transactions:
□ Process Checkpoint
□ Transaction ID
□ Extract Group Name
□ Redo Thread Number
□ Timestamp of first transaction of extract
□ System Change Number (SCN)
□ RBA and Redo Log Number
□ STATUS ("Pending Commit" after a commit has been forced by FORCETRANS, or "running")
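For example, to check lag on a running group (group name hypothetical):
SEND EXTRACT finance, GETLAG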
Monday, July 26, 2010
Troubleshooting Oracle GoldenGate
Troubleshooting Commands
Extract
If there are any errors, the Extract will appear as STOPPED, and you will have to investigate them.
First, check the report file C:\OGG10G\dirrpt\EMP_EXT.rpt.
Next, see if there are any indicative messages in the log files ggserr.log and sqlnet.log, which are in the main C:\OGG10G directory.
The ggserr.log file contains event information such as:
2009-12-02 14:53:26 GGS INFO 301 Oracle GoldenGate
Manager for Oracle, mgr.prm: Command received from GGSCI on host 10.10.10.10 (START EXTRACT EMP_EXT ).
2009-12-02 14:53:26 GGS INFO 302 Oracle GoldenGate Manager for Oracle, mgr.prm:
EXTRACT EMP_EXT starting.
These events can also be seen in the following way:
GGSCI (AnonDB) 1> VIEW GGSEVT
…
….
2009-12-02 15:09:34 GGS INFO 302 Oracle GoldenGate Manager for Oracle,
mgr.prm: EXTRACT EMP_EXT starting.
2009-12-02 15:13:26 GGS INFO 399 Oracle GoldenGate Command Interpreter
for Oracle: GGSCI command (AnonDB): EDIT PARAM emp_ext.
The sqlnet.log file may show errors such as
TNS-12557: TNS:protocol adapter not loadable
This particular error indicates a database connection issue, so you need to investigate at the listener level. Make sure the PATH includes the Oracle Home bin subdirectory right at the start. Other TNS errors may point to other solutions.
EMP_EXT.rpt may show an error such as the following:
2009-12-08 13:01:27 GGS ERROR 182 OCI Error beginning
session (status = 28009-ORA-28009: connection as SYS should be as SYSDBA or SYSOPER).
2009-12-08 13:01:27 GGS ERROR 190 PROCESS ABENDING.
This indicates that the Extract process is trying to log on as SYS without SYSDBA privileges. Simply change the login in the Extract parameter file to "system".
I should note here that specifying SYSDBA at the end of the login line in the Extract parameter file doesn’t work, although it does work with the DBLOGIN command in GGSCI that you will see next.
The ggserr.log file may show similar errors:
2009-12-03 00:43:16 GGS INFO 399 Oracle GoldenGate
Command Interpreter for Oracle: GGSCI command (AnonDB): start manager.
2009-12-03 00:43:25 GGS ERROR 182 Oracle GoldenGate
Manager for Oracle, mgr.prm: OCI Error during OCIServerAttach
(status = 12154-ORA-12154: TNS:could not resolve the connect identifier specified).
2009-12-03 00:43:25 GGS ERROR 190 Oracle GoldenGate
Manager for Oracle, mgr.prm: PROCESS ABENDING.
This indicates an issue with the login credentials supplied in the Manager parameter file, mgr.prm. The Manager has not started as a result.
Another possible error can be seen in the Extract report file. For example, EMP_EXT.rpt can have this error:
2009-12-07 16:40:08 GGS ERROR 190 No minimum supplemental
logging is enabled. This may cause extract process to handle key
update incorrectly if key column is not in first row piece.
2009-12-07 16:40:08 GGS ERROR 190 PROCESS ABENDING.
The solution is obviously to enable supplemental logging at the database level.
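A minimal sketch of that fix, first at the database level and then registering the table with GoldenGate (ADD TRANDATA enables table-level supplemental logging; the names reuse the examples below):
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
GGSCI> DBLOGIN USERID system PASSWORD fipassword1
GGSCI> ADD TRANDATA hr.employees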
Replicat
The first step is to use the GGSCI command DBLOGIN to connect to the database, so that other commands can be issued that will work on the database.
Note that DBLOGIN works fine as follows:
GGSCI (AnonDB) 1> DBLOGIN USERID system@localhost:1521/FIPRD3 PASSWORD fipassword1
Successfully logged into database.
Now you need to add a checkpoint table for the employees table that you are replicating:
GGSCI (AnonDB) 2> ADD CHECKPOINTTABLE HR.EMPLOYEES_CHKPT
Successfully created checkpoint table HR.EMPLOYEES_CHKPT.
Checkpoints stored in this table record the current read and write positions of the Replicat process. They prevent data loss if the process needs to be restarted, or if a server fault or network hiccup would otherwise cause data loss. The other advantage is that, by using checkpoints, multiple Extract or Replicat processes can read from the same set of trails.
Checkpoints are optional in the sense that they are not required for Extract and Replicat processes that run in batch mode, because such processes can always be restarted. However, checkpoints are necessary for continuously operating Extract and Replicat processes. They are normally maintained as files in the dirchk subdirectory, but in the case of Replicat they can optionally be stored in the database in the checkpoint table.
If you specify the checkpoint table in the GLOBALS parameter file, the above command can use that specification. In this scenario, the command could simply be
GGSCI (AnonDB) 3> ADD CHECKPOINTTABLE
No checkpoint table specified, using GLOBALS specification (hr.employees_chkpt).
Successfully created checkpoint table HR.EMPLOYEES_CHKPT.
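For reference, the GLOBALS entry that establishes this default might simply be:
CHECKPOINTTABLE hr.employees_chkpt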
You can now add the Replicat group as follows, specifying the exact same EXTTRAIL that was used by the Extract group set up in the first database. So the Replicat group feeds on or consumes the trail created by the Extract group:
GGSCI (AnonDB) 4> ADD REPLICAT emp_rep, EXTTRAIL C:\OGG10G\dirdat\et, CHECKPOINTTABLE hr.employees_chkpt
REPLICAT added.
Edit the parameter file for this Replicat group as follows:
GGSCI (AnonDB) 5> EDIT PARAM emp_rep
In the new file, enter the following:
REPLICAT emp_rep
USERID system@localhost:1521/FIPRD3, PASSWORD fipassword1
ASSUMETARGETDEFS
MAP hr.employees, TARGET hr.employees;
Because the tables have exactly the same DDL structure, you use the ASSUMETARGETDEFS parameter.
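Had the source and target structures differed, you would instead generate a definitions file with the DEFGEN utility and point the Replicat at it with a parameter along these lines (the path is hypothetical):
SOURCEDEFS ./dirdef/employees.def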
Now you can start the Replicat group:
GGSCI (AnonDB) 6> start REPLICAT emp_rep
Sending START request to MANAGER (‘GGSMGR’) …
REPLICAT EMP_REP starting
Wait a few seconds to see the status; if you try immediately, the status may say “stopped.” When you see the status as “running”, check the detailed information, and also issue an info all command to show all running processes:
GGSCI (AnonDB) 7> status REPLICAT emp_rep
REPLICAT EMP_REP: STOPPED
GGSCI (AnonDB) 8> status REPLICAT emp_rep
REPLICAT EMP_REP: RUNNING
GGSCI (AnonDB) 11> info REPLICAT emp_rep detail
REPLICAT EMP_REP Last Started 2009-12-08 13:35 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:01 ago)
Log Read Checkpoint File C:\OGG10G\dirdat\et000001
2009-12-08 13:33:24.000000 RBA 985
Extract Source Begin End
C:\OGG10G\dirdat\et000001 2009-12-08 13:33 2009-12-08 13:33
C:\OGG10G\dirdat\et000000 * Initialized * 2009-12-08 13:33
Current directory C:\OGG10G
Report file C:\OGG10G\dirrpt\EMP_REP.rpt
Parameter file C:\OGG10G\dirprm\EMP_REP.prm
Checkpoint file C:\OGG10G\dirchk\EMP_REP.cpr
Checkpoint table HR.EMPLOYEES_CHKPT
Process file C:\OGG10G\dirpcs\EMP_REP.pcr
Error log C:\OGG10G\ggserr.log
GGSCI (AnonDB) 12> info all
Program Status Group Lag Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING EMP_EXT 00:00:00 00:00:03
REPLICAT RUNNING EMP_REP 00:00:00 00:00:06
Saturday, July 3, 2010
Oracle GoldenGate Best Practices and Tips
Lately I've been working, once again, with GoldenGate (now Oracle GoldenGate) data integration software. GoldenGate offers tremendously useful capabilities, including CDC (Change Data Capture), data warehouse ETL, efficient/low-impact data replication from diverse database management systems, real-time standby database maintenance (for high availability, upgrades, and patches), feeding Oracle Data Integrator (ODI), and data distribution. So, I thought I'd offer some GoldenGate best practices and tips that I've learned largely by making mistakes:
I. Best Practices
PARALLEL PROCESSING
Ensure the system has enough shared memory. GoldenGate runs as a set of operating system processes alongside Oracle. Each Extract or Replicat process requires upwards of 25-50 MB of system shared memory, which means less memory for the Oracle DBMS, especially the SGA.
Use parallel Replicat groups on the target system to reduce latency through parallelism. Consider parallel Extract groups for tables that are fetch-intensive (e.g., those that trigger SQL procedures).
Group tables that have referential integrity relationships to each other in the same Extract-Replicat pair.
Pair each Replicat with its own trail and corresponding Extract process.
When using parallel Replicats, configure each one to process a different portion of the overall data.
PASSTHRU PARAMETER
Consider using this parameter if no filtering, conversion, or mapping is required and you are using a data pump.
In pass-through mode, the Extract process does not look up table definitions, either from the database or from a data definitions file. Pass-through mode increases the throughput of the data pump, because all of the functionality that looks up object definitions is bypassed.
This saves database fetches to improve performance.
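A minimal data-pump parameter file using pass-through mode might look like this (group, host, and trail names are hypothetical):
EXTRACT pump1
PASSTHRU
RMTHOST targethost, MGRPORT 7809
RMTTRAIL ./dirdat/rt
TABLE hr.*;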
INSERTAPPEND
A new GoldenGate 10.4 feature.
Use it for large transactions.
It appends records at the end of the table rather than performing more costly inserts into other areas of the table.
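As a sketch, the parameter sits in the Replicat parameter file alongside the mappings (reusing the group from the troubleshooting example above):
REPLICAT emp_rep
USERID system@localhost:1521/FIPRD3, PASSWORD fipassword1
ASSUMETARGETDEFS
INSERTAPPEND
MAP hr.employees, TARGET hr.employees;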
DATAPUMP (not the Oracle DB utility)
1. The primary Extract group writes to a trail on the source system.
2. The data pump reads this trail and sends the data across the network to a remote trail on the target.
3. It adds storage flexibility and also serves to isolate the primary Extract process from TCP/IP activity.
4. It can be configured for online or batch processing.
5. It can perform data filtering, mapping, and conversion, or it can be configured in pass-through mode, where data is passively transferred as-is, without manipulation.
6. Use it to perform filtering, thereby removing that processing overhead from the primary Extract group.
7. Use one or more pumps for each source and each target for parallelism.
In most business cases, it is best practice to use a data pump. Some reasons for using a data pump include the following:
● Protection against network and target failures:
In a basic GoldenGate configuration, with only a trail on the target system, there is nowhere on the source system to store data that the Extract process continuously extracts into memory. If the network or the target system becomes unavailable, the primary Extract could run out of memory and abend. However, with a trail and data pump on the source system, captured data can be moved to disk, preventing the abend. When connectivity is restored, the data pump extracts the data from the source trail and sends it to the target system(s).
● You are implementing several phases of data filtering or transformation. When using complex filtering or data transformation configurations, you can configure a data pump to perform the first transformation either on the source system or on the target system,
and then use another data pump or the Replicat group to perform the second transformation.
● Consolidating data from many sources to a central target. When synchronizing multiple source databases with a central target database, you can store extracted data on each source system and use data pumps on each of those systems to send the data to a trail
on the target system. Dividing the storage load between the source and target systems reduces the need for massive amounts of space on the target system to accommodate data arriving from multiple sources.
● Synchronizing one source with multiple targets. When sending data to multiple target systems, you can configure data pumps on the source system for each target. If network connectivity to any of the targets fails, data can still be sent to the other targets.
STEP BY STEP Datapump Configuration
ON THE SOURCE SYSTEM
To configure the Manager process (Reference: Oracle GoldenGate Administration Guide, Version 10.4):
1. On the source, configure the Manager process according to the instructions in Chapter 2.
2. In the Manager parameter file, use the PURGEOLDEXTRACTS parameter to control the purging of files from the local trail.
To configure the primary Extract group:
3. On the source, use the ADD EXTRACT command to create a primary Extract group. For documentation purposes, this group is called ext.
ADD EXTRACT ext, TRANLOG, BEGIN <time>
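The remaining source-side steps would follow the same pattern; a hypothetical continuation, matching the data pump description above (trail and group names are illustrative):
ADD EXTTRAIL ./dirdat/lt, EXTRACT ext
ADD EXTRACT pump, EXTTRAILSOURCE ./dirdat/lt
ADD RMTTRAIL ./dirdat/rt, EXTRACT pump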