Monday, March 1, 2021

 Oracle GoldenGate - Integrated or Classic Mode - 12.2 - Tuning

Find lag for an integrated Extract:

SELECT capture_name, (86400*(available_message_create_time - capture_message_create_time)) lag_in_seconds FROM GV$GOLDENGATE_CAPTURE;

Find lag for an integrated Replicat:

SELECT r.apply_name, (86400*(r.dequeue_time - c.lwm_message_create_time)) latency_in_seconds
FROM GV$GG_APPLY_READER r, GV$GG_APPLY_COORDINATOR c
WHERE r.apply# = c.apply# AND r.apply_name = c.apply_name;

GG Error log:

You can add lag parameters to the Manager parameter file to capture lag alerts in the ggserr.log file:

LAGREPORTMINUTES 5

LAGINFOMINUTES 5

LAGCRITICALMINUTES 15


Best Practices: Configuring Oracle GoldenGate for the Best Performance:

CPUs to enable parallelism:

Integrated Extract/Replicat - MAX_SGA_SIZE and PARALLELISM

Extract: 

TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 1024, parallelism 2)

Replicat: 

DBOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 1024, parallelism 2)

STREAMS_POOL_SIZE:

alter system set STREAMS_POOL_SIZE=3G scope=both;

By default, one integrated Extract requests the logmining server to run with a MAX_SGA_SIZE of 1 GB. Size STREAMS_POOL_SIZE using:

MAX_SGA_SIZE * (number of Extracts) * (PARALLELISM per Extract) * 1.25
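For example (a hypothetical sizing using the formula above): two integrated Extracts, each with MAX_SGA_SIZE 1024 and PARALLELISM 2, gives 1024 MB * 2 * 2 * 1.25 = 5120 MB, so:

alter system set STREAMS_POOL_SIZE=5G scope=both;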


A high-throughput network with optimized TCP configuration. Example:

RMTHOST gg_hostname, MGRPORT 7840, TCPBUFSIZE 3000000, TCPFLUSHBYTES 3000000

Refer:

Excessive LAG on OGG Extract Data Pump Sending Data over WAN, Calculate TCP TCPBUFSIZE TCPFLUSHBYTES (Doc ID 1071892.1)

Compressed trail files if the network is constrained:

RMTHOST gg_hostname, MGRPORT 7840, TCPBUFSIZE 3000000, TCPFLUSHBYTES 3000000, COMPRESS


Best Practice:

Enable supplemental logging - TRANDATA or SCHEMATRANDATA

If using Oracle databases, configure integrated Extract and Replicat

Use PASSTHRU in the data pump when no filtering is used (see the sample pump parameter file after this list)

Default packet size of data sent to RMTHOST: 30,000 bytes

Use table-level filters to extract only the required tables

Perform data filtering at the pump level so less data is sent over the network

Perform data conversion on the Replicat side

Do not use DDL INCLUDE ALL unless required

Use fast disk I/O to speed up trail file reads/writes
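A minimal data pump parameter file sketch with PASSTHRU (the group name and trail path are hypothetical):

EXTRACT pmp1
RMTHOST gg_hostname, MGRPORT 7840, TCPBUFSIZE 3000000, TCPFLUSHBYTES 3000000
RMTTRAIL ./dirdat/rt
PASSTHRU
TABLE jay.*;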


Batch Transactions:

By default, Replicat applies transactions one at a time, in commit order.
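To group similar operations into larger arrays, the standard Replicat parameter BATCHSQL can be enabled; a sketch with illustrative (not recommended) thresholds:

BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 2000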

Check AWR for high waits on the LogMiner process

By default, GG Extract can consume up to 80% of the SGA, which can cause performance issues in the source database.

Limit the amount of SGA used by GG Extract with the following in the Extract parameter file:

TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 2560, PARALLELISM 2)

PARALLELISM 2 configures two LogMiner servers (the default is 2).

TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 2560, PARALLELISM 3)

PARALLELISM 3 configures three LogMiner servers.


RANGE and THREAD/THREADRANGE Coordinated Delivery - 12c:

Splitting Large Table with Coordinated Delivery:

replicat.prm

MAP jay.trade, TARGET rpt.trade, THREADRANGE (1-3, TRADEID);

The key benefits of coordinated replication are:

A single parameter file

Automatic control of the replication order, ensuring the correctness of the transaction-splitting process.

Enhancements to facilitate error recovery.

OR

Splitting large tables into multiple delivery processes using RANGE:

REP1.PRM: MAP jay.trade, TARGET rpt.trade, FILTER (@RANGE (1, 3, TRADEID));

REP2.PRM: MAP jay.trade, TARGET rpt.trade, FILTER (@RANGE (2, 3, TRADEID));

REP3.PRM: MAP jay.trade, TARGET rpt.trade, FILTER (@RANGE (3, 3, TRADEID));

Note: THREAD and THREADRANGE may be used with wildcard MAP statements.
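For reference, a coordinated Replicat is created in GGSCI with the COORDINATED option (the group name and trail below are hypothetical):

GGSCI> ADD REPLICAT rfin, COORDINATED MAXTHREADS 5, EXTTRAIL ./dirdat/aa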


Reference:

Note 1448324.1 OGG Integrated Healthcheck Script

Note 1488668.1 GoldenGate Performance Data Gathering

Note 1596493.1 GoldenGate Integration with AWR

Note 1485620.1 Best Practices Downstream Capture

MAA white paper Oracle GoldenGate Performance Best Practices

https://www.oracle.com/technetwork/database/availability/maa-gg-performance-1969630.pdf

Friday, January 29, 2021

 How to Stop and Start a Pluggable Database Automatically?

$srvctl start database -d dbname

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 TEST1                          MOUNTED

SQL> alter pluggable database TEST1 open instances=all;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 TEST1                          READ WRITE NO

SQL> alter pluggable database TEST1 save state instances=all;

Pluggable database altered.

SQL> SELECT con_name, instance_name, state FROM dba_pdb_saved_states;

CON_NAME                       INSTANCE_NAME                  STATE

------------------------------ ------------------------------ --------------

TEST1                          TESTY1                         OPEN

TEST1                          TESTY2                         OPEN
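To undo this later, so the PDB no longer opens automatically at instance startup:

SQL> alter pluggable database TEST1 discard state instances=all;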


Thursday, January 28, 2021

How to find the default tablespace type - Oracle 19c?

select property_value from database_properties where property_name = 'DEFAULT_TBS_TYPE';

PROPERTY_VALUE

--------------------------------------------------------------------------------

BIGFILE

If the default tablespace type was set to BIGFILE at database creation, you need not specify the keyword BIGFILE in the CREATE TABLESPACE statement. A bigfile tablespace is created by default.

If the default tablespace type was set to BIGFILE at database creation, but you want to create a traditional (smallfile) tablespace, then specify a CREATE SMALLFILE TABLESPACE statement to override the default tablespace type for the tablespace that you are creating.
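A minimal sketch of that override (the tablespace name and datafile path are hypothetical):

CREATE SMALLFILE TABLESPACE ts_small DATAFILE '/u01/oradata/ts_small01.dbf' SIZE 100M;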

 

How to enable DDL logging in an Oracle container database - 19c?

Solution:

At CDB:

SQL> ALTER SYSTEM SET enable_ddl_logging=TRUE SCOPE=BOTH;

-- ALTER SYSTEM SET enable_ddl_logging=FALSE SCOPE=BOTH; -- Turn off
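To confirm the setting:

SQL> show parameter enable_ddl_logging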

Any PDB:

create table tbl_a (num number);

alter table tbl_a add num2 number;

Check DDL logging in the new log files:

$DUMP_LOCATION/log/ddl_dbname.log  and

$DUMP_LOCATION/log/ddl/log.xml

ddl_dbname.log or log.xml:

2021-01-28T13:46:41.640037-05:00

diag_adl:create table tbl_a (num number)

2021-01-28T13:50:20.725803-05:00

diag_adl:alter table tbl_a add num2 number






Saturday, December 26, 2020

Oracle 12.2 and Oracle 19.6 - Container Database START/SHUTDOWN:
SQL*Plus: Release 19.0.0.0.0 - Production on Sat Dec 26 20:32:30 2020
Version 19.6.0.0.0

SQL> startup nomount;
ORACLE instance started.
Total System Global Area 5368705960 bytes
Fixed Size                  9159592 bytes
Variable Size            2315255808 bytes
Database Buffers         2936012800 bytes
Redo Buffers              108277760 bytes

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> select name,open_mode,con_id,dbid from v$containers;
no rows selected

No CDB or PDB information is available in the NOMOUNT state. NOMOUNT is used to re-create or restore a missing control file for the CDB instance.

SQL> alter database mount;
Database altered.

SQL> select name,open_mode,cdb from v$database;
NAME                           OPEN_MODE            CDB
------------------------------ -------------------- ---
CJTED                          MOUNTED              YES

SQL> SELECT con_id, name, open_mode FROM v$pdbs;
    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         2 PDB$SEED                       MOUNTED
         3 PJTED1                         MOUNTED
         4 PJTED2                         MOUNTED

The CDB's control files are now open for the instance, but CDB$ROOT and all PDBs are in the MOUNT state.

SQL> alter database open;    -- or: alter database CJTED open;
Database altered.

SQL> select name,open_mode,con_id,dbid from v$containers;
NAME                           OPEN_MODE      CON_ID       DBID
------------------------------ ---------- ---------- ----------
CDB$ROOT                       READ WRITE          1  105692156
PDB$SEED                       READ ONLY           2  657829016
PJTED1                         READ WRITE          3 1002662416   >>> Oracle 12.2
PJTED2                         READ WRITE          4  531562146

SQL> SELECT con_name, instance_name, state FROM dba_pdb_saved_states;   >>> Oracle 12.2
no rows selected

CDB$ROOT is OPEN and available for READ and WRITE operations.
All PDBs are now READ WRITE, with the seed database PDB$SEED open READ ONLY.
dba_pdb_saved_states is available starting with Oracle 12.1.

========================= shutdown ================================

SQL> ALTER PLUGGABLE DATABASE ALL CLOSE IMMEDIATE;
Pluggable database altered.

SQL> select name,open_mode,con_id,dbid from v$containers;
NAME                           OPEN_MODE      CON_ID       DBID
------------------------------ ---------- ---------- ----------
CDB$ROOT                       READ WRITE          1  105692156   >>>> ROOT
PDB$SEED                       READ ONLY           2  657829016   >>>> SEED
PJTED1                         MOUNTED             3 1002662416
PJTED2                         MOUNTED             4  531562146
CSTG1                          MOUNTED             5 2604502831

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

This shuts down all PDBs, PDB$SEED, and CDB$ROOT inside the CDB.
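To bring everything back up afterward, a sketch (saved states, where configured, open PDBs automatically):

SQL> startup
SQL> alter pluggable database all open;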


Tuesday, December 22, 2020


Oracle 19c:

ORA-65040 (operation not allowed from within a pluggable database) indicates that an operation was attempted that can only be performed in the root container.


Issue:

SQL> CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR AS '/zfs/dev/expdp/abc';

CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR AS '/zfs/dev/expdp/abc'

*

ERROR at line 1:

ORA-65040: operation not allowed from within a pluggable database

Solution:

SQL> alter session set container=cdb$root;

Session altered.

SQL> SELECT DIRECTORY_NAME,DIRECTORY_PATH FROM DBA_DIRECTORIES where DIRECTORY_NAME = 'DATA_PUMP_DIR';

DIRECTORY_NAME     DIRECTORY_PATH
------------------ ----------------------------------------------------
DATA_PUMP_DIR      /u01/app/oracle/product/19.0.0.0/dbhome_2/rdbms/log/

SQL> CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR AS '/zfs/dev/expdp/abc';

Directory created.

SQL> SELECT DIRECTORY_NAME,DIRECTORY_PATH FROM DBA_DIRECTORIES where DIRECTORY_NAME = 'DATA_PUMP_DIR';

DIRECTORY_NAME     DIRECTORY_PATH
------------------ ----------------------------------------------------
DATA_PUMP_DIR      /zfs/dev/expdp/abc

Refer:

"ORA-65040: operation not allowed from within a pluggable database" (Doc ID 2122655.1)


Tuesday, December 1, 2020

Sequence - Oracle 11g - 12c - 18c:

1. Oracle 11g - Sequence and trigger for auto-increment columns:

CREATE TABLE T1 ( ID NUMBER(10) NOT NULL,

DESCRIPTION VARCHAR2(50) NOT NULL);

ALTER TABLE T1 ADD ( CONSTRAINT T1_pk PRIMARY KEY (ID));

CREATE SEQUENCE  SEQ_11g START WITH 1;


CREATE OR REPLACE TRIGGER T1_trg
BEFORE INSERT ON T1
FOR EACH ROW
BEGIN
  SELECT SEQ_11g.NEXTVAL INTO :new.id FROM dual;
END;
/
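A quick test of the trigger (hypothetical data):

INSERT INTO T1 (DESCRIPTION) VALUES ('first row');
SELECT ID, DESCRIPTION FROM T1;   -- ID populated by T1_trg from SEQ_11g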


2. IDENTITY column on Oracle 12c:

CREATE TABLE  T1 ( c1 NUMBER GENERATED by default on null as IDENTITY,

c2 VARCHAR2(10));

Or specify starting and increment values while preventing any direct insert into the identity column (GENERATED ALWAYS; again, Oracle 12c+ only):


CREATE TABLE  T1 ( 

c1 NUMBER GENERATED ALWAYS as IDENTITY(START with 1 INCREMENT by 1),

c2 VARCHAR2(10));

Alternatively, Oracle 12c also allows using a sequence as a default column value:

CREATE SEQUENCE SEQ_12c START WITH 1;


CREATE TABLE T1 (

ID NUMBER(10) DEFAULT SEQ_12c.nextval NOT NULL,

DESCRIPTION VARCHAR2(50) NOT NULL);

ALTER TABLE T1 ADD ( CONSTRAINT T1_pk PRIMARY KEY (ID));
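As with the trigger approach, the ID column is then populated automatically on insert (hypothetical data):

INSERT INTO T1 (DESCRIPTION) VALUES ('uses SEQ_12c.nextval');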


3. Oracle 18c - scalable sequence

In Oracle 18c, for data-ingestion workloads with a high level of concurrency, the new scalable sequence generates unordered primary or unique key values. This significantly reduces the sequence and index block contention caused by right-growing indexes, and thus provides better throughput, data-load scalability, and performance than the earlier workaround of configuring a very large sequence cache using the CACHE clause of CREATE SEQUENCE or ALTER SEQUENCE.

A scalable sequence value is prefixed with a sequence offset number, which by default contains 6 digits: the first 3 digits are derived from the instance number plus 100, and the next 3 digits are derived from the SID of the session.

create sequence seq_test_11g
start with 1 increment by 1
maxvalue 100;

select seq_test_11g.nextval from dual;

NEXTVAL: 1


create sequence seq_test_18c
start with 1 increment by 1
maxvalue 100
scale extend;   >>>> (SID nnn, Instance_Number 001)


SQL> select seq_test_18c.nextval from dual;

NEXTVAL: 101608001

SQL> select instance_number from v$instance;

INSTANCE_NUMBER: 1

SQL> select sid from v$mystat where rownum = 1;

SID: 608

Here 101608001 breaks down as 101 (instance number 1 + 100) || 608 (session SID) || 001 (the sequence value, zero-padded to the width of MAXVALUE).

select sequence_name, scale_flag, extend_flag from user_sequences
where sequence_name in ('SEQ_TEST_18C', 'SEQ_TEST_11G');

SEQUENCE_NAME                  SCALE_FLAG EXTEND_FLAG
------------------------------ ---------- -----------
SEQ_TEST_18C                   Y          Y
SEQ_TEST_11G                   N          N