Tuesday, January 22, 2013

ORACLE - Manually apply archived log to the Standby database

Issue: A manually managed standby database fails to apply an archived log because the log file is corrupt.

ORA-00353: log corruption near block 118784 change 68177937 time 01/22/2013 08:37:43
ORA-00334: archived log: '/ArchLog/oracle/QNTP01/arch/arch_1_15929_800746832.arc'
ORA-00283: recovery session canceled due to errors
ORA-00354: corrupt redo log block header
ORA-00353: log corruption near block 118784 change 68177937 time 01/22/2013 08:37:43
ORA-00334: archived log: '/ArchLog/oracle/QNTP01/arch/arch_1_15929_800746832.arc'
ORA-00283: recovery session canceled due to errors
ORA-00354: corrupt redo log block header
ORA-00353: log corruption near block 118784 change 68177937 time 01/22/2013 08:37:43
ORA-00334: archived log: '/ArchLog/oracle/QNTP01/arch/arch_1_15929_800746832.arc'
ORA-283 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
ORA-1112 signalled during: ALTER DATABASE RECOVER CANCEL ...

Solution:
At the STANDBY database:
Copy a fresh copy of the archived log from the primary side, then re-run recovery:
sqlplus -s '/ as sysdba' << EOF
recover standby database;
AUTO
EOF

Friday, January 18, 2013

ORACLE - Control files

1) Check control files:
1. show parameter control
2. select name, status from v$controlfile;
3. select controlfile_created from v$database;


2) How to re-create the control file from trace, or if you lost all control files?
alter database backup controlfile to trace;
alter database backup controlfile to trace as '/tmp/controlfile.txt';

Example:
--     Set #2. RESETLOGS case
--
-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- The contents of online logs will be lost and all backups will
-- be invalidated. Use this only if online logs are damaged.
-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
--  ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "jdb" RESETLOGS [NORESETLOGS] FORCE LOGGING ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 4672
LOGFILE
  GROUP 1 (
    '+DG_DATA1/jdb/onlinelog/group_1.263.800998057',
    '+DG_DATA1/jdb/onlinelog/group_1.262.800998067'
  ) SIZE 300M BLOCKSIZE 512,
  GROUP 2 (
    '+DG_DATA1/jdb/onlinelog/group_2.267.800997685',
    '+DG_DATA1/jdb/onlinelog/group_2.266.800997695'
  ) SIZE 300M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
  '+DG_DATA1/jdb/datafile/system.256.800746777',
  '+DG_DATA1/jdb/datafile/sysaux.257.800746777',
  '+DG_DATA1/jdb/datafile/undotbs1.258.800746777',
  '+DG_DATA1/jdb/datafile/users.259.800746777',
  '+DG_DATA1/jdb/datafile/jdb_data01.280.801222241'
  CHARACTER SET AL32UTF8 [Character Set];

RECOVER DATABASE;
-- Database can now be opened zeroing the online logs.
ALTER DATABASE OPEN RESETLOGS;

3) CONTROL_FILE_RECORD_KEEP_TIME [default 7]: Specifies the minimum number of days before a reusable record in the control file can be reused (overwritten).

Note: This parameter applies only to records in the control file that are circularly reusable (such as archived log and RMAN backup records). It does not apply to records such as datafile, tablespace, and redo thread records.
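The current value can be checked and changed online; for example (the 14-day value here is only an illustration):
show parameter control_file_record_keep_time
alter system set control_file_record_keep_time=14 scope=both;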

Thursday, January 17, 2013

ORACLE - Non-ASM to ASM migration

How to move/migrate a non-ASM file system into ASM?
1. Restore control file
2. Restore non-system data files
3. Restore database
4. Restore redo log files (see my notes: ORACLE - Redolog files)
5. Restore temporary tablespace (see my notes: ORACLE - TEMP files)

1) Restore controlfile from NON-ASM to ASM :
select name from v$controlfile;
/u01/app/oracle/oradata/jdb/control01.ctl
/u01/app/oracle/oradata/jdb/control02.ctl

Modify the spfile and create a pfile:
show parameter db_create [To check the asm disk group]
alter system set control_files='+jdata' scope=spfile;
alter system set db_create_file_dest='+jdata' scope=spfile;
create pfile='/tmp/new_pfile.ora' from spfile;

Start the instance in nomount with the new pfile, then restore the control file:
sql> startup nomount pfile='/tmp/new_pfile.ora'

rman target /
rman>restore controlfile to '+jdata' from '/u01/app/oracle/oradata/jdb/control01.ctl';
rman>sql 'alter database mount';

2) Restore datafiles   from NON-ASM to ASM:

Option 1) Mount the database, copy the datafile into ASM using RMAN, and rename the data file:
select tablespace_name, file_name from dba_data_files;
TABLESPACE_NAME                FILE_NAME
------------------------------ ------------------------------------------------------
USERS                          /u01/app/oracle/oradata/jdb/users01.dbf
UNDOTBS1                       /u01/app/oracle/oradata/jdb/undotbs01.dbf
SYSAUX                         /u01/app/oracle/oradata/jdb/sysaux01.dbf
SYSTEM                         /u01/app/oracle/oradata/jdb/system01.dbf
JXHIST_2013_1                  +jdata/jdb/datafile/jxhist_2013_1.268.804772687
TB_JREFERENCE                  +jdata/jdb/datafile/tb_jreference.269.804774715
TB_JTEST                       +jdata/jdb/datafile/tb_jtest.270.804863897
7 rows selected.

rman target /
rman> copy datafile '/u01/app/oracle/oradata/jdb/users01.dbf' to '+jdata';
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=192 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/u01/app/oracle/oradata/JDB/users01.dbf
output file name=+jdata/jdb/datafile/users.277.804975705 tag=TAG20130117T202145 RECID=1 STAMP=80
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01

startup mount
sql> alter database rename file '/u01/app/oracle/oradata/jdb/users01.dbf' to '+jdata/jdb/datafile/users.277.804975705';
sql>alter database open;

Option 2) Offline the tablespace (non-system), copy the datafile into ASM using RMAN, and rename the data files:
alter tablespace users offline;
rman target /
copy datafile '/u01/app/oracle/oradata/jdb/users01.dbf' to '+jdata';
( get asm datafile name from rman output - see above)

sqlplus / as sysdba
alter database rename file '/u01/app/oracle/oradata/jdb/users01.dbf' to '+jdata/jdb/datafile/uses01.264.685642227';
alter tablespace users online;

3) Restore database from non-ASM to ASM: Mount the database, back up the database as a copy in ASM format, switch the database to the copy (which updates the control file), and recover the database -
startup nomount;
$ rman target /
rman> restore controlfile to '+jdata' from '/u01/app/oracle/oradata/jdb/control01.ctl';
rman> sql 'alter database mount';
rman> backup as copy database format '+jdata'; [copy the whole database into the new disk group]
rman> switch database to copy; [the copy now exists in the new diskgroup; this updates the control file]
rman> recover database;
rman> sql 'alter database open resetlogs';
rman> exit

ORACLE - Temporary Tablespace

Check temporary tablespace information:
select  * from database_properties where property_name = 'DEFAULT_TEMP_TABLESPACE';
------------------------  -----
DEFAULT_TEMP_TABLESPACE   TEMP


select tablespace_name, contents, status from dba_tablespaces where tablespace_name like '%TEMP%';
TABLESPACE_NAME                CONTENTS  STATUS
------------------------------ --------- ---------
TEMP                           TEMPORARY ONLINE


Create a new temporary tablespace TEMP_X and make it default temporary tablespace, drop the old TEMP tablespace:
CREATE TEMPORARY TABLESPACE TEMP_X
TEMPFILE '+MWDATA1' SIZE 4000M  AUTOEXTEND ON EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;


ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP_X;

SQL> select FILE_NAME, TABLESPACE_NAME from  dba_temp_files;
FILE_NAME                                          TABLESPACE_NAME
-------------------------------------------------- ----------------
+JDATA1/JDB/tempfile/temp_x.274.804949525     TEMP_X
/u01/app/oracle/oradata/JDB/temp01.dbf        TEMP


DROP TABLESPACE TEMP INCLUDING CONTENTS AND DATAFILES;

Create a new temporary tablespace TEMP, make it the default temporary tablespace, and drop the old TEMP_X tablespace:
CREATE TEMPORARY TABLESPACE TEMP
TEMPFILE '+MWDATA1' SIZE 4000M  AUTOEXTEND ON EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;


ALTER DATABASE DEFAULT TEMPORARY TABLESPACE  TEMP;

DROP TABLESPACE TEMP_X INCLUDING CONTENTS AND DATAFILES;

Resize a Temporary Tablespace:
alter tablespace temp resize 1000M;
alter database tempfile 'tempfile_name' resize 3000M;


Shrink a Temporary Tablespace:
alter tablespace temp shrink space;
alter tablespace temp shrink tempfile 'tempfile_name' keep 2000M;


RENAME a Temporary Tablespace:
alter tablespace temp rename to temp_x;

To view information about temporary tablespaces:
select * from dba_temp_files;
select * from v$tempfile;
select * from v$temp_space_header;


Monitor the temporary tablespace usage:
select * from v$sort_usage;
select *  from v$sort_segment;


If you try to drop the default temporary tablespace, you will get the following error message:
SQL> drop tablespace temp;
drop tablespace temp
*
ERROR at line 1:
ORA-12906: cannot drop default temporary tablespace

ORACLE - ARCHIVE LOG & NOARCHIVE LOG

How to setup archivelog and noarchivelog database ?

Set archive log file location and format at spfile:
alter system set log_archive_format ='arch_%t_%s_%r.arc' scope=spfile;
alter system set log_archive_dest_1='location=/ArchLog/oracle/JDB/arch' scope=spfile;

or
alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';

Startup database with mount option:
shutdown immediate;
startup mount;
alter database archivelog; [or: alter database noarchivelog;]
alter database open ;


Verification:
1) archive log list;
2) show parameter log_archive
3) select log_mode from v$database;
LOG_MODE
-------------------------
NOARCHIVELOG

ORACLE - Redolog files

Oracle logfile day-to-day task:

1) Check logfile information:
SQL> select GROUP#, THREAD#, MEMBERS, STATUS from v$log;
    GROUP#    THREAD#    MEMBERS STATUS
---------- ---------- ---------- ----------------
         1          1          1 INACTIVE
         2          1          1 INACTIVE
         3          1          1 CURRENT


SQL>select member from v$logfile;
MEMBER
-------------------------------------------
/u01/app/oracle/oradata/JDB/redo03.log
/u01/app/oracle/oradata/JDB/redo02.log
/u01/app/oracle/oradata/JDB/redo01.log


2) Add new log groups/logfiles before dropping the old logfiles:
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 ('+JDATA1') SIZE 500M;
or
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 ('+JDATA1','+FRA') SIZE 500M;
or
ALTER DATABASE ADD LOGFILE THREAD 1
 GROUP 2 ('+JDATA1','+FRA') SIZE 500M,
 GROUP 3 ('+JDATA1','+FRA') SIZE 500M;
or
Use Grid control


3)Drop INACTIVE or UNUSED logfiles:
SQL> select group#,status from v$log;
    GROUP# STATUS
---------- ------------
         1 INACTIVE        [candidate for drop]
         2 INACTIVE       [candidate for drop]
         3 ACTIVE           [wait or alter system switch logfile;]
         4 ACTIVE          [ORA-01624 - needed crash recovery]
         5 CURRENT      [ORA-01623 - cannot drop]
         6 UNUSED


Use the switch logfile command to change the status of the logfiles, and drop only INACTIVE logfiles.
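A minimal sketch (assuming group 1 is the group you want to drop):
alter system switch logfile;
alter system checkpoint; [helps move ACTIVE groups to INACTIVE]
select group#, status from v$log; [confirm the group is now INACTIVE]
alter database drop logfile group 1;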

4) Oracle error messages for drop ACTIVE and CURRENT logfiles:
1)alter database drop logfile group 3
*
ERROR at line 1:
ORA-01624: log 3 needed for crash recovery of instance JDB (thread 1)
ORA-00312: online log 3 thread 1: '/u01/app/oracle/oradata/JDB/redo03.log'

2)alter database drop logfile group 5
*
ERROR at line 1:
ORA-01623: log 5 is current log for instance JDB (thread 1) - cannot drop
ORA-00312: online log 5 thread 1: '/u01/app/oracle/oradata/JDB/redo05.log'

Wednesday, January 16, 2013

ORACLE SQL Tuning - 10046 & 10053 events

Use the following script to enable the 10046 and 10053 trace events for a slow query:
$ sqlplus /nolog
connect username/password
alter session set timed_statistics = true;
alter session set statistics_level=all;
alter session set max_dump_file_size = unlimited;
ALTER SESSION SET TRACEFILE_IDENTIFIER = '10046_QUERY001_SLOW';
alter session set events '10046 trace name context forever, level 8';
alter session set events '10053 trace name context forever, level 1';

My Slow QUERY (select * from table;)
select 'close the cursor' from dual;
exec dbms_session.reset_package;
alter session set events '10046 trace name context off';
alter session set events '10053 trace name context off';
quit


Check the trace file (format it with tkprof if needed) for the query issue.

Ref Doc ID : 1274511.1/225598.1

ORACLE Data Pump - expdp & impdp

How to setup directory for data pump expdp at schema level?

CREATE OR REPLACE DIRECTORY DP_DUMP_DIR AS '/bkup/dbname/expdp';
GRANT READ, WRITE ON DIRECTORY DP_DUMP_DIR TO OPER;

col DIRECTORY_PATH for a40
SELECT * FROM ALL_DIRECTORIES;



Schema expdp:
expdp oper/pass directory=dp_dump_dir schemas=schemas dumpfile=schema.dmp logfile=schema.log

REMAP_SCHEMA=old_schema:new_schema
REMAP_TABLESPACE=tablespace1:DATA,tablespc2:DATA
DUMPFILE=schema_name_%U.dmp
TRANSFORM=oid:n (expdp schema X from a database and impdp with REMAP_SCHEMA to schema Y on the SAME database)
PARALLEL=2
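The options above can go into a parameter file; a sketch with placeholder schema and tablespace names:

par_imp.txt:
DIRECTORY=dp_dump_dir
DUMPFILE=schema_name_%U.dmp
LOGFILE=imp_schema.log
REMAP_SCHEMA=old_schema:new_schema
REMAP_TABLESPACE=tablespace1:DATA,tablespc2:DATA
PARALLEL=2

impdp oper/pass PARFILE=par_imp.txt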












ORACLE Database Link

Database links allow users to access a remote database.

How to create private db link ?
At remote database: 
1.create a user for db link and grant schema privileges to him:
create user dblink_user identified by password;
grant create session to dblink_user;
grant select on s_role to dblink_user;

At source database:
1. Add an entry to the tnsnames.ora file:
dbname =
  (description =
    (address = (protocol = tcp)(host = dbname.com)(port = 1521))
    (address = (protocol = tcp)(host = dbname_v.com)(port = 1521))
    (load_balance = yes)
    (connect_data =
      (server = dedicated)
      (service_name = dbname.com)
    )
  )

2.connect as a schema owner, create private db links and verify:
grant create database link to schema_owner; (if not exists)
conn schema_owner/password
create database link dblink1 connect to dblink_user identified by password using 'dbname';
select * from dba_db_links; [verification]

How to access remote database using dblink ?

1) Access remote database objects at the local database: Example -
select count(*) from remote_schema_name.table_name@dblink1;
2) Access the data from the remote database and insert it into the local database: Example -
INSERT INTO schema_name.table_name
select * from remote_schema_name.table_name@dblink1;
Note: Avoid PUBLIC database links, for security and audit purposes.






ORACLE - Transporting Tablespace

The following steps summarize the process of transporting a tablespace:
At source:
1) Check cross-platform transport for endian format for both platforms:
SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;


col PLATFORM_NAME for a32
SELECT * FROM V$TRANSPORTABLE_PLATFORM;

2) Check for a self-contained set of tablespaces:
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('history1',TRUE);
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('history1,history2', TRUE);
or
Add TRANSPORT_FULL_CHECK=Y in expdp

Check the violations:
SELECT * FROM TRANSPORT_SET_VIOLATIONS;

3) Make all tablespaces in the set read only:
ALTER TABLESPACE history1 READ ONLY;

4) Use Data Pump export utility:
expdp system/pass DUMPFILE=expdp.dmp DIRECTORY=dp_dump_dir
TRANSPORT_TABLESPACES= history1, history2 [ TRANSPORT_FULL_CHECK=Y]

At Target:
5) Import using Data Pump utility:
impdp system/pass PARFILE='par.txt'

par.txt:
DIRECTORY=dp_dump_dir
DUMPFILE=expdp.dmp
TRANSPORT_DATAFILES=history1.dbf, history2.dbf [list the copied datafiles, not the tablespace names]
REMAP_SCHEMA=

6) Make tablespace read write:
ALTER TABLESPACE history1 READ WRITE;

7) (Optional) If the endian formats or OS are different, then convert the tablespaces using RMAN (at the source, before impdp):
RMAN target /
CONVERT TABLESPACE history1, history2
TO PLATFORM 'Solaris[tm] OE'
FORMAT '/tmp/%U';

Limitations:
1) Source and target databases MUST use the same character set and national character set
2) The tablespace name MUST NOT already exist in the target database, i.e. if the tablespace already exists there, you cannot transport it
3) The tablespace set must be self-contained


Tuesday, January 15, 2013

ORACLE Golden Gate

1. Download Golden Gate software 11.1.1.1
unzip fbo_ggs_Linux_x64_ora11g_64bit.zip
tar xvf fbo_ggs_Linux_x64_ora11g_64bit.tar
create a dir /ogg

2. Launch GGSCI
cd /ogg/
GGSCI

3. Create subdirectories:
GGSCI (soucredb1-lnx) 1> create subdirs
GGSCI (pricedev1-lnx) 1> create subdirs

===== Create GoldenGate database user ggs_owner on both source and target database =====

1. Create tablespace GGS_DATA
CREATE SMALLFILE TABLESPACE GGS_DATA
DATAFILE '+CFIP_DATA' SIZE 500M
AUTOEXTEND ON NEXT 100M MAXSIZE 30g LOGGING
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;

2. Create user ggs_owner
create user ggs_owner identified by ********
default tablespace ggs_data
temporary tablespace temp;

3. Grant privilege to ggs_owner
grant connect,resource,dba to ggs_owner;
grant select any dictionary, select any table to ggs_owner;
grant create table to ggs_owner;
grant flashback any table to ggs_owner;
grant execute on dbms_flashback to ggs_owner;
grant execute on utl_file to ggs_owner;

===== Setup and start GoldenGate Manager Process on both source and target=====
GGSCI > Edit param MGR
PORT 7809
GGSCI > start manager

=====Enabling Database-Level Supplemental Logging on source database=====
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
--Verify that it is enabled
SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

=====Enabling Table-Level Supplemental Logging on source database=====
GGSCI (oragdev1-lnx) 2> dblogin userid ggs_owner@CFIPDEV password ********
Successfully logged into database.
GGSCI (oragdev1-lnx) 5> add trandata ORACUSP.*
--Verify that it is enabled
SQL> select owner, log_group_name, table_name
from dba_log_groups where owner = 'ORACUSP';

=====Enabling DDL replication on source database=====
cd /ggs
-- following scripts were provided by GoldenGate
SQL> @marker_setup.sql
SQL> @ddl_setup.sql
SQL> @role_setup.sql
SQL> grant ggs_ggsuser_role to ggs_owner;
SQL> @ddl_enable.sql
-- Disable Recycle Bin
SQL> alter system set recyclebin = off scope=spfile;
Stop and start the database.

=====Starting the Extract on source database =====
1. Configuring the Local Extract
GGSCI (sourceserver) 1> edit params lcuspd1
EXTRACT lcuspd1
------------------------------------------------
-- Local extract for ORACUSP on CFIPDEV
------------------------------------------------
SETENV (NLS_LANG = AMERICAN_AMERICA.AL32UTF8)
USERID ggs_owner@CFIPDEV, PASSWORD ********
EXTTRAIL dirdat/cfipdev/oracusp/l1
TRANLOGOPTIONS ASMUSER sys@ASM, ASMPASSWORD ********
-- TRANLOGOPTIONS EXCLUDEUSER ggs_owner
DDL INCLUDE MAPPED
TABLE oracusp.*;

2. Adding the Extract
GGSCI (sourceserver) > dblogin userid ggs_owner@CFIPDEV password ********
GGSCI (sourceserver) > ADD EXTRACT lcuspd1, TRANLOG, THREADs 2, BEGIN NOW
GGSCI (sourceserver) > ADD EXTTRAIL dirdat/cfipdev/oracusp/l1, EXTRACT lcuspd1, MEGABYTES 100

3. Starting the Extract
GGSCI (sourceserver) > START EXTRACT lcuspd1

4. Verifying the Extract
GGSCI (sourceserver) > info EXTRACT lcuspd1
GGSCI (sourceserver) > info EXTRACT lcuspd1, detail
GGSCI (sourceserver) > stats EXTRACT lcuspd1

=====Starting the Data Pump on source database =====
1. Configuring the Data Pump
GGSCI (sourceserver) 1> edit params pcuspd1
EXTRACT pcuspd1
-------------------------------------------------------------------
-- Data Pump extract for ORACUSP on CFIPDEV
-------------------------------------------------------------------
PASSTHRU
RMTHOST pricedev_clus, MGRPORT 7809
RMTTRAIL dirdat/cfipdev/oracusp/r1
TABLE ORACUSP.*;

2. Adding the Data Pump
GGSCI (sourceserver) > ADD EXTRACT pcuspd1, EXTTRAILSOURCE dirdat/cfipdev/oracusp/l1
GGSCI (sourceserver) > ADD RMTTRAIL dirdat/cfipdev/oracusp/r1, EXTRACT pcuspd1, MEGABYTES 100

3. Starting the Data Pump
GGSCI (sourceserver) > START EXTRACT pcuspd1

4. Verifying the Data Pump
GGSCI (sourceserver) > info EXTRACT pcuspd1
GGSCI (sourceserver) > info EXTRACT pcuspd1, detail
GGSCI (sourceserver) > stats EXTRACT pcuspd1

=====Loading the Data from source database to target database =====
1. Create tablespaces and schema user on target database
2. Use expdp to export source schema from source database
3, Copy dumpfile from source database server to target database server
4. Use impdp to import dumpfile to same schema on target database

=====Disable Triggers and Cascade-Delete Constraints on target database =====
set echo off
set verify off
set pagesize 2000
set linesize 250
set trim on
set heading off
set feedback off
spool &&SCHEMA..disable_cascade_delete_constraints.sql
select 'alter table '||owner||'.'||table_name||
' disable constraint '||constraint_name||';'
from all_constraints
where delete_rule = 'CASCADE'
and owner = '&SCHEMA';
spool off
spool &SCHEMA..disable_triggers.sql
select 'alter trigger '||owner||'.'||trigger_name||
' disable ;'
from all_triggers
where owner = '&SCHEMA';
spool off


=====Starting the Replicat on target database =====
1. Adding Checkpoint Table
Edit GLOBALS file to Add following 2 lines:
GGSCHEMA GGS_OWNER
CHECKPOINTTABLE GGS_OWNER.CHKPTAB
GGSCI (targetserver) > dblogin userid ggs_owner@DBAPDEV password ********
GGSCI (targetserver) > add checkpointtable
2. Configuring the Replicat
GGSCI (targetserver) 1> edit params RCUSPD1
Replicat RCUSPD1
-------------------------------------------------------------------
-- Replicat for ORACUSP Schema
-------------------------------------------------------------------
SETENV (NLS_LANG = AMERICAN_AMERICA.AL32UTF8)
USERID ggs_owner@DBAPDEV, PASSWORD Abcd_1234
HandleCollisions
AssumeTargetDefs
DDL INCLUDE ALL
DDLERROR DEFAULT IGNORE RETRYOP -- handle ddl error caused abend.
Map ORACUSP.*, Target ORACUSP.* ;

3. Adding the Replicat
GGSCI (targetserver) > ADD REPLICAT RCUSPD1, EXTTRAIL dirdat/cfipdev/oracusp/r1

4. Starting the Replicat
GGSCI (targetserver) > START Replicat RCUSPD1

5. Verifying the Replicat
GGSCI (targetserver) > info Replicat RCUSPD1
GGSCI (targetserver) > info Replicat RCUSPD1, detail
GGSCI (targetserver) > stats Replicat RCUSPD1

ORACLE RAC - IP, VIP, SCAN IP modification

1) How to modify IP, VIP and SCAN for RAC ?

Plan to change testrac1/2 RAC public IP, VIP and SCAN:

Summary

1.  All DBs, services, ASMs, listeners down (DBA)
2.  Registers new public IP and new VIP on DNS (SA/Network)
3.  Change /etc/hosts and /etc/rc.config.d with new IP (DBA)
4.  RAC public IP change via oifcfg (part I below, DBA)
5.  RAC VIP change via srvctl (part II below, DBA)
6.  Reboot server (DBA)
7.  Network engineer makes public IP change on LAN (Network)
8.  Finally test (DBA/App Team)
 
Before any of the following steps, all databases, services, ASM instances, and listeners will be shut down.
I. Change Public IP:

    Use the getif option to show the currently configured interfaces.

% $ORA_CRS_HOME/bin/oifcfg getif
lan2  10.0.0.0  global  cluster_interconnect
lan0  138.69.21.0  global  public

Change Public IP
% $ORA_CRS_HOME/bin/oifcfg delif -global lan0
% $ORA_CRS_HOME/bin/oifcfg setif -global lan0/newIP:public

No private IP change here (10.0.0.1 and 10.0.0.2 remained same)
Verify
% $ORA_CRS_HOME/bin/oifcfg getif

II. Change VIP :
Find out current VIP and subnet

srvctl config nodeapps -n testrac1 -a   
(VIP exists.: /testrac1-vip/138.69.21.247/255.255.255.0/lan0)
srvctl config nodeapps -n testrac2 -a   
(VIP exists.: /testrac2-vip/138.69.21.162/255.255.255.0/lan0)

1. Stop the nodeapps:
srvctl stop nodeapps -n testrac1
2. Verify the VIP is no longer running:
netstat -in
3. Make any changes necessary to testrac2's "/etc/hosts" file as root.
4. Change the VIP as root (according to SA, the new subnet will be 255.255.254.0):
srvctl modify nodeapps -n testrac1 -A newIP/255.255.254.0/lan0
5. Verify:
srvctl config nodeapps -n testrac1 -a
6. Repeat steps 1 to 5 on testrac2.
7. Bring up nodeapps, ASM, listeners, databases, etc.
8. Final testing.

III. Change SCAN IP:

SCAN Concepts:
1) Single Client Access Name (SCAN) is the virtual hostname provided to all clients connecting to the cluster (as opposed to the per-node VIP hostnames in 10g and 11gR1).
2) SCAN is a domain name registered to at least one and up to three IP addresses, either in the Domain Name Service (DNS) or the Grid Naming Service (GNS).
3) By default, the name used as the SCAN is also the name of the cluster and must be globally unique throughout your enterprise.
4) SCAN VIP addresses must be on the same subnet as virtual IP addresses and public IP addresses.
5) Oracle strongly recommends that you do not configure SCAN VIP addresses in the hosts file. If you do use the hosts file to resolve the SCAN name, you can have only one SCAN IP address, and you will receive a Cluster Verification Utility failure at the end of installation.
6) Because the SCAN is associated with the cluster as a whole, rather than with a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients.
7) Grid Infrastructure starts the local listener LISTENER on all nodes to listen on the local VIP, and the SCAN listener LISTENER_SCAN1 (up to three cluster-wide) to listen on the SCAN VIP(s); an 11gR2 database by default sets local_listener to the local LISTENER and remote_listener to the SCAN listener.
8) The SCAN listener runs out of GRID_HOME, and by default, in 11gR2 the local listener runs out of GRID_HOME as well.

How does SCAN work ?

When a client submits a request, the SCAN listener listening on a SCAN IP address and the SCAN port is contacted on the client's behalf. Because all services on the cluster are registered with the SCAN listener, the SCAN listener replies with the address of the local listener on the least-loaded node (each SCAN listener keeps updated cluster load statistics) where the service is currently being offered. Finally, the client establishes a connection to the service through the listener on the node where the service is offered. All of these actions take place transparently to the client, without any explicit configuration required on the client.
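A client-side tnsnames.ora entry that connects through the SCAN (the hostname and service name here are made-up examples) looks like:

jdb =
  (description =
    (address = (protocol = tcp)(host = my-scan.example.com)(port = 1521))
    (connect_data =
      (server = dedicated)
      (service_name = jdb.example.com)
    )
  )

Only the single SCAN name is needed; the SCAN listener hands the client off to the local listener on the least-loaded node.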

How to check SCAN IP and round-robin access ?

Use nslookup. DNS is set up to provide round-robin access to the IPs resolved by the scan entry.
Run the nslookup command 2-3 times and check that the round-robin algorithm works.
The result should be a set of 3 IPs in a DIFFERENT ORDER.

Start/Restart/Stop the scan listener:
1.srvctl start listener
2.srvctl start scan
3.srvctl start scan_listener

Check config :
srvctl config  listener
srvctl config  scan
srvctl config  scan_listener

Check status :
srvctl status listener
srvctl status scan
srvctl status scan_listener

Modify the scan NAME:
srvctl modify scan -n my-scan
srvctl modify scan_listener -u [ Update the scan listeners ]
srvctl config scan_listener

Modify the scan listener PORT :
srvctl modify scan_listener  -p 1522
srvctl modify scan_listener  -u [ Update the scan listeners ]
srvctl config scan_listener

Check the remote listener and modify:
show parameter remote_listener
lsnrctl status LISTENER_SCAN1
alter system set remote_listener='RACDB_Clus_SCAN:1522'
scope=BOTH SID='*';
alter system register;
Note: Modify tnsnames.ora (port=1522)

show parameter remote_listener
NAME              TYPE        VALUE
remote_listener   string      RACDB_Clus.jdata.com:1521

show parameter local_listener
NAME            TYPE        VALUE
local_listener  string  (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.88.999.111)(PORT=1521))))

Ref. Docs: [ID - 972500.1 and 887522.1]







    ORACLE RMAN - Backupset & Copy backup

    1)  Setup RMAN variables (show all):
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    RMAN auto backup for control file and spfile (if used to start the database).

    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    RMAN skips backups of archived logs and read-only tablespaces that have already been backed up.

    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    Controls archived log deletion when you have standby databases.

    CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
    Setting parallelism for a device type specifies the number of server sessions to be used for I/O to that device type.

    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+RAC_Sharelocation/snapcf_jdb.f';
    RMAN creates a copy of the control file for read consistency, this is the snapshot controlfile.
    Due to the changes made to the controlfile backup mechanism in 11gR2 any instances in the cluster may write to the snapshot controlfile. Therefore, the snapshot controlfile file needs to be visible to all instances.
    In 11gR2 onwards, the controlfile backup happens without holding the control file enqueue.
    For a non-RAC database this changes nothing. But for a RAC database, the snapshot controlfile location must be on a shared file system that is accessible from all the nodes.
    The snapshot controlfile MUST be accessible by all nodes of a RAC database.


    2) RMAN weekly full backup script:
    rman target / << EOF
    run {
    crosscheck backup;
    crosscheck archivelog all;
    allocate channel c1 type disk format '/bkup/%d/rman/rman_%d_inc0_%T_%u.bk';
    allocate channel c2 type disk format '/bkup/%d/rman/rman_%d_inc0_%T_%u.bk';
    backup incremental level=0 as [COMPRESSED] BACKUPSET database;
    sql 'alter system archive log current';
    backup current controlfile format '/bkup/%d/rman/rman_%d_ctl_%T_%u.bk';
    release channel c1;
    release channel c2;

    delete noprompt backup completed before 'SYSDATE-14';
    delete noprompt archivelog until time 'SYSDATE-2';
    }
    EOF


    3) Differential backup - FASTER BACKUP, but recovery takes longer. It needs the full backup plus all differential backups for recovery:
    backup incremental level 1 database; (the default backup type is differential)

    4) Cumulative backup - FASTER RESTORE - It needs the full (level 0) backup and only the last cumulative
    (level 1) backup for recovery:
    backup incremental level 1 CUMULATIVE as BACKUPSET database

    5) crosscheck archivelog all: RMAN checks whether the archivelogs are physically available on disk. If an archivelog is no longer on disk, its status in the RMAN catalog and controlfile is changed from "A" (available) to "X" (expired).
    crosscheck archivelog all;
    delete expired archivelog all;
    crosscheck backup;
    crosscheck copy;

    RMAN error messages:
    RMAN-06059: expected archived log not found, lost of archived log compromises recoverability

    6) Delete old archivelog files using rman:
    delete noprompt archivelog until time 'SYSDATE-1';

    7) LIST: query the RMAN repository for lists of backups, copies, archived logs, and database incarnations:
    list backup;
    list backupset;

    list copy;
    list expired backup;
    list incarnation of database;        









    ORACLE RMAN - Data Recovery Advisor

    Oracle 11g RMAN - DRA (Data Recovery Advisor): The Data Recovery Advisor is a tool that helps you diagnose and repair data failures and corruption. The tool can automatically repair diagnosed failures.

    How to find which files need recovery and repair them automatically using RMAN?
    rman target /
    RMAN> list failure;
    RMAN> list failure [Failure ID#] detail;
    RMAN> advise failure;
    RMAN> repair failure preview;
    RMAN> repair failure; (It will generate the repair script. Do you want to execute - Yes or No?)

    How to verify after recovery?
    select name, open_mode, log_mode from v$database;
    select max(sequence#) from v$archived_log;
    check alert.log

    How to detect and avoid corruption ?

    1. DBVERIFY :
    dbv file='+JDATA/JTEST/datafile/jtest.263.653950000' userid=j/pass

    2. ANALYZE....VALIDATE STRUCTURE
    analyze table table_name validate structure [cascade] [online];
    analyze index index_name validate structure;

    3.RMAN Validate:
    RMAN>backup check logical validate database;
    RMAN>backup check logical validate datafile 1;
    RMAN>validate database;
    RMAN>validate datafile 1;
    RMAN>validate tablespace users;

    4. RMAN block recovery:
    RMAN> blockrecover datafile 7 block 5;
    RMAN> blockrecover corruption list until time sysdate -1;

    5. Corruption views:

    select * from v$database_block_corruption;
    select * from v$backup_corruption;
    select * from v$copy_corruption;
     
    DB_ULTRA_SAFE:
    The new parameter DB_ULTRA_SAFE=[OFF | DATA_ONLY | DATA_AND_INDEX] (default OFF) controls the behavior of the DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM, and DB_LOST_WRITE_PROTECT parameters in Oracle 11g.
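    For example, a minimal sketch of enabling it - DB_ULTRA_SAFE is a static parameter, so it must be set in the spfile and takes effect only after a restart:

    ```sql
    -- Enable block checking / checksum / lost-write protection for data segments
    ALTER SYSTEM SET db_ultra_safe = DATA_ONLY SCOPE = SPFILE;
    SHUTDOWN IMMEDIATE
    STARTUP
    -- Verify the derived settings it controls
    SHOW PARAMETER db_block_checking
    SHOW PARAMETER db_block_checksum
    SHOW PARAMETER db_lost_write_protect
    ```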

    List of data failures:
    1.Incorrect access permissions
    2.Offline table-space
    3.Physical corruptions - (block check-sum failures or invalid block header field)
    4.Logical corruptions - (inconsistent dictionary, corrupt row piece, corrupt index entry)
    5.Inconsistencies - (control file is older or newer than the data files and online redo logs)
    6.I/O failures - (limit on the number of open files exceeded, channels inaccessible, network or I/O error)
    See MOS Doc IDs 465946.1, 762339.1 and 428570.1 for other corruption options.




    ORACLE - Duplicate Database

    Here are different scenarios for creating duplicate databases:
    A duplicate database is a copy of the target database with a unique DBID; it is entirely independent of the primary database.

    A standby database does not get a new DBID. It is a special copy of the primary database that is updated by applying archived redo logs from the primary database.

     I) Duplicate database without connecting to the target database and recovery catalog:


    1) At the source, take a backup and move it to the target server: RMAN> backup database plus archivelog;
    2) Create a password file and init.ora for the auxiliary instance with only one parameter: DB_NAME=jdup
    3) Start the instance in nomount: STARTUP NOMOUNT
    4) Connect to the auxiliary instance from RMAN and perform the duplicate:
    DUPLICATE DATABASE TO jdup
    UNTIL TIME "TO_DATE('01-FEB-2012 10:00:00','DD-MON-YYYY HH24:MI:SS')"
    SPFILE
    set control_files='+jdata/jdup/control/controlfile.ctl'
    set db_file_name_convert='+jdata/jdb','+jdata/jdup'
    set log_file_name_convert='+jdata/jdb','+jdata/jdup'
    BACKUP LOCATION '/u01/bkup/';

    II) Database Duplication from ACTIVE Database:

    1) No need to take a backup of the source database
    2) Password files MUST match on the source and target databases
    3) tnsnames.ora should have connect strings for both the target and source
    4) DUPLICATE TARGET DATABASE TO jdb FROM ACTIVE DATABASE
        SPFILE NOFILENAMECHECK;


    III) Duplicate database using NOFILENAMECHECK: production and auxiliary have the same directory structure and run on two different machines (Oracle 10g - nofilenamecheck)

     DUPLICATE TARGET DATABASE TO testdev nofilenamecheck;
     
    IV)
    Duplicate database using NOFILENAMECHECK and DORECOVER (UNTIL clause): production and auxiliary have the same directory structure and run on two different machines (Oracle 11g - dorecover nofilenamecheck)

    select next_change# from v$archived_log where sequence# = (select max(sequence#) from v$archived_log);
    set until scn 121212;
    DUPLICATE TARGET DATABASE TO testdev dorecover nofilenamecheck;


    V) Duplicate database using a pfile:
     

    DUPLICATE TARGET DATABASE TO dupdb PFILE = <initDUPDB.ora>  NOFILENAMECHECK;

    ORACLE - Opatch : Install & De-install procedure

    Prerequisites:
    DB Version: 11gR1
    Utility Required: OPatch version 11.1.0.8.2 or later
    Download Location: My Oracle Support patch 6880880
    Note: Keep a backup of /u01 or all binaries.

    Install OPatch 11.1.0.8.2. This needs to be done on both ASM and database homes.
    . profile_GENERIC $ORACLE_SID
    cd $ORACLE_HOME
    mv OPatch Opatch_old
    cp /dba/staging/patch/Opatch/p6880880_111000_Linux-x86-64.zip $ORACLE_HOME
    unzip p6880880_111000_Linux-x86-64.zip
    opatch lsinventory  (Verify that the version is 11.1.0.8.2)

    Download patch 11.1.0.7.11 :
     Download patch p13621679_111070_Linux-x86-64.zip  
     unzip p13621679_111070_Linux-x86-64.zip

    Patch ASM : Patch Pre-install Steps
    Source the ASM environment: . .profile_GENERIC +ASM

    Check patch conflicts
    Determine whether any currently installed one-off patches conflict with the PSU patch as follows:
    RUN:
    /u01/app/oracle/product/11.1.0/db_1/OPatch/opatch prereq
    CheckConflictAgainstOHWithDetail -phBaseDir /dba/staging/patches/PSU/April2012/13621679

    Install Steps:
    Generate a script file of all the steps executed.
    The patch needs to be applied on the ASM home and each database home:
    If you are using a Data Guard Physical Standby database, you must install this patch on both the primary database and the physical standby database, as described by My Oracle Support Note 278641.1.

    Shutdown all databases and listeners including ASM
    /dba/maestro/db_stop_all.sh
    ps -ef | grep pmon
    ps -ef | grep tns
    ps -ef | grep oracle
    $ORA_PERL/rac/ocr_resources.sh

    Apply patch to ASM - Make sure the new OPatch is available in the ASM ORACLE_HOME
    cd /dba/staging/patches/PSU/April2012/13621679
    echo $ORACLE_HOME  ( Verify the home is correctly set to ASM)
    opatch lsinventory
    opatch apply
    opatch lsinventory (Verify that the PSU is listed in the output)

    Start all databases and listeners including ASM
    /dba/maestro/db_start_all.sh
    ps -ef | grep pmon
    ps -ef | grep tns
    ps -ef | grep oracle  (Make sure everything has been started)
    $ORA_PERL/RAC/ocr_resources.sh

    Patch DATABASE
    Patch Pre-install Steps

    Check patch conflicts
    Determine whether any currently installed one-off patches conflict with the PSU patch as follows:
    RUN:

    /u01/app/oracle/product/11.1.0/db_1/OPatch/opatch prereq
    CheckConflictAgainstOHWithDetail -phBaseDir /dba/staging/patches/PSU/April2012/13621679

    Check invalid objects:

    Check invalid objects on all databases to which the PSU is being applied and save them for post install compare.
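    One common way to capture the invalid-object list (and recompile afterwards) - a sketch, assuming access to DBA views:

    ```sql
    -- Save the list of invalid objects for post-install comparison
    SELECT owner, object_type, object_name
    FROM   dba_objects
    WHERE  status = 'INVALID'
    ORDER  BY owner, object_type, object_name;

    -- After patching, recompile and re-run the query to compare
    @?/rdbms/admin/utlrp.sql
    ```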

    Check services online/offline
    Check services on all databases to which the PSU is being applied and save them for post install compare
    $ORA_PERL/rac/ocr_resources.sh

    Install Steps: Apply the patch to each database home (DB_1, DB_2, DB_3, DB_4 etc.) after sourcing the appropriate database profile.
    . .profile_GENERIC <ORACLE_INSTANCE>
    cd /home/oracle
    echo $ORACLE_HOME ( Verify the home is correct)
    /dba/maestro/stop<db_1/2/3/4>.sh
    cd /dba/staging/patches/PSU/April2012/13621679
    opatch lsinventory
    opatch apply
    opatch lsinventory ( Verify that the PSU is listed in the output)
     
    Patch Post-install Steps
    Post-installation steps need to be run for each database using the Oracle home that is being patched, except ASM.

    1. Apply the patch conflict resolution one-off patches that were determined to be needed when you performed the steps in Pre-install one-off-conflict step.

    2. The following steps load modified SQL files into the database. For a RAC environment, perform these steps on only one node.

    Run catbundle.sql

    . .profile_GENERIC <ORACLE_SID>
    cd $ORACLE_HOME/rdbms/admin
    sqlplus /nolog
    CONNECT / AS SYSDBA
    startup
    @$ORACLE_HOME/rdbms/admin/catbundle.sql psu apply
    shutdown immediate

    Check the following log files in $ORACLE_HOME/cfgtoollogs/catbundle for any errors :
    cd $ORACLE_HOME/cfgtoollogs/catbundle
    grep ORA- catbundle_PSU_<ORACLE_SID>_APPLY_<TIMESTAMP>.log
    grep ORA- catbundle_PSU_<ORACLE_SIDS>_GENERATE_<TIMESTAMP>.log

    Check invalid objects:

    Start Database and listeners :
    Start all the databases after the catbundle script has completed
    cd /home/oracle
    . .profile_GENERIC <ORACLE_SID>
    echo $ORACLE_HOME <--- Verify the home is correct
    /dba/maestro/start<db_1/2/3/4>.sh

    Run Opatch :
    1. Run opatch lsinventory on both nodes and on DR also to verify the patch applied. $ORACLE_HOME/OPatch/opatch lsinventory
    Review Script Log :
    2. Review the script log for any error during patch application.
    view /dba/staging/patches/PSU/April2012/script.log

    Patch De-install Steps:
    1. Follow these steps for each node in the cluster, one node at a time:
    2. Shut down the instance on the node. (Shut down all RDBMS instances before any ASM instances.)
    3. Run the OPatch utility specifying the rollback argument as follows.

    opatch rollback -id 10248531

    4. Start the instance on the node. Depending on the type of home, enter the following commands:

    /dba/maestro/start<db_1/2/3/4>.sh

    5. On the node where post-install step 2 was executed, do the following:

    Start all database instances running from the Oracle home.
    For each database instance running out of the ORACLE_HOME, connect to the database using SQL*Plus as SYSDBA and run the rollback script as follows:
    cd $ORACLE_HOME/rdbms/admin
    sqlplus /nolog
    SQL> CONNECT / AS SYSDBA
    SQL> STARTUP
    SQL> @catbundle_PSU_<database SID>_ROLLBACK.sql
    SQL> QUIT
    In a RAC environment, the name of the rollback script will have the format catbundle_PSU_<database SID PREFIX>_ROLLBACK.sql.
    Check the log file for any errors. The log file is found in $ORACLE_HOME/cfgtoollogs/catbundle and is named catbundle_PSU_<database SID>_ROLLBACK_<TIMESTAMP>.log.

    Monday, January 14, 2013

    ORACLE - Standby (ACTIVE) database

    Physical Standby Database:
    I) Create Standby Database using  DUPLICATE DATABASE :
    At Primary database (JDB):
    1) Enable force logging (Primary and Standby)
    2) Create a password file and add connect strings to the tnsnames.ora/listener.ora files (Primary and Standby)
    3) Create standby redo logs (Primary and Standby)
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 SIZE 200M;
    4) Modify Primary pfile or spfile:
    alter system set log_archive_dest_1='location="USE_DB_RECOVERY_FILE_DEST",
    valid_for=(ALL_LOGFILES,ALL_ROLES)' scope = BOTH;
    alter system set log_archive_dest_2='service=JDR LGWR ASYNC NOAFFIRM
    db_unique_name="JDR" valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE)' scope=BOTH;
    create pfile='/dba/hibeprd.pfile' from spfile;
    5) Take Primary Database backup and move to standby server :
    create pfile from spfile;
    backup as backupset device type disk database ;
    backup as backupset device type disk current controlfile for standby;


    At standby database (JDR):
    6) Modify the pfile for standby:
    *.db_name='JDB'
    *.log_archive_dest_1='location="USE_DB_RECOVERY_FILE_DEST",
    valid_for=(ALL_LOGFILES,ALL_ROLES)'
    *.control_files='+JDATA/jdr/controlfile/current'

    Add to standby init.ora:
    *.db_unique_name='JDR'
    *.standby_file_management='AUTO'
    *.db_file_name_convert='+JDATA/JDB/','+JDATA/JDR/'
    *.log_file_name_convert='+JDATA/JDB/','+JDATA/JDR/'
    *.fal_client='(DESCRIPTION for JDR database)'
    *.fal_server='(DESCRIPTION for JDB database)'
    *.log_archive_config='dg_config=(JDR,JDB)'
    *.service_names='JDB','JDR'
    *.dg_broker_start=TRUE
    Note: comment out the following 2 parameters if it is RAC:
    #*.cluster_database_instances=2
    #*.cluster_database=true

    startup nomount;
    rman target sys/password@JDB auxiliary / <<EOF
    duplicate target database for standby NOFILENAMECHECK;
    exit;
    EOF

    7) Primary and Standby database: Enable Data Guard broker on both:
    alter system set DG_BROKER_START=true scope=both;
    Note: For DATA GUARD BROKER SETUP - See my notes on Data Guard

    II) Create standby database using Duplicate...ACTIVE DATABASE:
    FOR STANDBY – the duplicate is for use as a standby so a DBID change will not be forced.
    FROM ACTIVE DATABASE – instructs RMAN to use the active target database instead of disk based backups.
    DORECOVER – do recovery bringing the standby database up to the current point in time.
    SPFILE – values for parameters specific to the auxiliary instance can be set here.
    NOFILENAMECHECK – this option is added because the duplicate database files use the same names as the source database files

    It will create standby database over the network using active(primary) database files.

    duplicate target database for standby from active database
    spfile
    set db_unique_name='jdb_stdby'
    set fal_client='<standby connect string>'
    set fal_server='<primary connect string>'
    set standby_file_management='AUTO'
    set log_archive_config='<dg_config value>'
    set log_archive_dest_2='<destination for primary>';

    III) Start MRP:
    alter database recover managed standby database disconnect from session;

    IV) Active Data Guard (requires an Oracle license): Open the standby database in READ ONLY
    and restart managed recovery:
    alter database recover managed standby database cancel;
    alter database open;
    alter database recover managed standby database disconnect;

    Sunday, January 13, 2013

    ORACLE - Exadata

    EXADATA X2 - comes in 4 configurations, each with High Performance or High Capacity disk options:
    1) Exadata X2-2 Quarter Rack
    2) Exadata X2-2 Half Rack
    3) Exadata X2-2 Full Rack
    4) Exadata X2-8 Full Rack

    Smart Scan is only possible for full table scans or full index scans. Smart Scan can only be used with direct-path reads: direct-path reads are automatically used for parallel queries, and may also be used for serial queries.
    1. The client issues a SELECT statement for a few rows and columns
    2. The DB kernel checks that Exadata is available and constructs an iDB command
    3. The Exadata server software scans the data blocks to extract the requested rows and columns
    4. The results are not block images, so they are not stored in the buffer cache
    5. The DB kernel collects the result sets from all cells
    6. The rows are returned to the client (using the PGA)
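    The steps above can be checked after the fact: a query was actually offloaded if the cell statistics show bytes eligible for offload and bytes returned by Smart Scan. A sketch, using standard 11.2 statistic names:

    ```sql
    -- Bytes eligible for offload vs. bytes actually returned by Smart Scan
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('cell physical IO bytes eligible for predicate offload',
                    'cell physical IO interconnect bytes returned by smart scan');
    ```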

    Predicate filtering: only the requested rows are returned to the database server rather than all the rows in a table.
    Column filtering: only the requested columns are returned to the database server rather than all the columns in a table.

    A compression unit is a logical structure spanning multiple database blocks. Each row is self-contained within a compression unit. Data is organized by column during data load, and each column is compressed separately. Smart Scan is supported.

    A storage index is a memory-based structure that reduces the amount of physical I/O required by the cell. A storage index keeps track of minimum and maximum values from query predicates and builds storage index regions based on usage.
    The storage space inside each cell is logically divided into 1MB chunks called "storage regions".
    In each storage region, data distribution statistics are maintained in a memory structure called a region index.
    Each region index contains distribution information for up to 8 columns.
    The storage index is a collection of region indexes.
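    The I/O avoided by storage indexes is exposed as a cell statistic - a sketch (the counter resets at instance restart):

    ```sql
    -- Physical I/O avoided thanks to storage indexes
    SELECT name, value
    FROM   v$sysstat
    WHERE  name = 'cell physical IO bytes saved by storage index';
    ```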
    Fast Mirror Resync: During transient disk failures within a failure group, ASM keeps track of the changed extents that need to be applied to the offline disk. Once the disk is available, only the changed extents are written to resynchronize the disk, rather than overwriting the contents of the entire disk. This can speed up the resynchronization process considerably.
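    Fast mirror resync is governed by the diskgroup attribute disk_repair_time (requires compatible.asm of 11.1 or higher). A sketch - the diskgroup and disk names are illustrative:

    ```sql
    -- Allow up to 3.6 hours for a transient failure before ASM drops the disk
    ALTER DISKGROUP jdata01 SET ATTRIBUTE 'disk_repair_time' = '3.6h';

    -- Once the disk is back, bring it online; only changed extents are copied
    ALTER DISKGROUP jdata01 ONLINE DISK disk02;
    ```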
    Order of patches:
    1. InfiniBand switch patch: spine and leaf switches
    2. Storage server patch: cell nodes, plus the Database Minimal Pack on the compute nodes
    3. Grid home (CRS/ASM) and Oracle home patch
     
    XDMG and XDWK background processes
    /opt/oracle.SupportTools
    /etc/oracle/cell/network-config/cellip.ora and cellinit.ora
    IOPS (disk bandwidth): OLTP - small random I/O
    MBPS (channel bandwidth): DW - large sequential I/O


    Exadata Log files:

    On the cell nodes:
    1. Cell alert.log : $CELLTRACE:/opt/oracle/cell/log/diag/asm/cell/{node name}/trace/alert.log
    2. MS logfile : $CELLTRACE:/opt/oracle/cell/log/diag/asm/cell/{node name}/trace/ms-odl.log
    3. OS watcher output data: /opt/oracle.oswatcher/osw/archive/
    4. VM Core files :/var/log/oracle/crashfiles
    5. SunDiag output files :/tmp/sundiag_.tar.bz2
    6. Cell patching issues related logfiles: /var/log/cellos
    The major logfile of patch application output you will find in the db node from where you are patching in the location /tmp/<cell version>/patchmgr.stdout and patchmgr.err
    7. Disk controller firmware logs:  /opt/MegaRAID/MegaCli/MegaCli64 -fwtermlog -dsply -a0

    On the Database nodes:
    1. Database alert.log : $ORACLE_BASE/diag/rdbms/{DBNAME}/{sid}/trace/alert_{sid}.log
    2. ASM alert.log : $ORACLE_BASE/diag/asm/+asm/+ASM{instance number}/trace/ alert_+ASM {instance number}.log
    3. Clusterware CRS alert.log : $GRID_HOME/log/{node name}/alert{node name}.log
    4. Diskmon logfiles : $GRID_HOME/log/{node name}/diskmon/diskmon.lo*
    5. VM Core files for Linux : /u01/crashfiles

     

    ORACLE - RAC 10gR2/11gR2

    How to delete CRS HOME ?
    We have an automated way of cleaning up CRS as shown below:
    1) Run rootdelete.sh script present in $ORA_CRS_HOME/install directory
    2) Run rootdeinstall.sh script present in $ORA_CRS_HOME/install directory
    3) Manually clean up the CRS :
    1. Remove following files as root user on ALL NODES IN CLUSTER
    rm /etc/oracle/*
    rm -f /etc/init.d/init.cssd
    rm -f /etc/init.d/init.crs
    rm -f /etc/init.d/init.crsd
    rm -f /etc/init.d/init.evmd
    rm -f /etc/rc2.d/K96init.crs
    rm -f /etc/rc2.d/S96init.crs
    rm -f /etc/rc3.d/K96init.crs
    rm -f /etc/rc3.d/S96init.crs
    rm -f /etc/rc5.d/K96init.crs
    rm -f /etc/rc5.d/S96init.crs
    rm -Rf /etc/oracle/scls_scr
    rm -f /etc/inittab.crs
    cp /etc/inittab.orig /etc/inittab

    2.If they are not already down then kill off EVM, CRS and CSS process as root user on ALL NODES
    ps -ef | grep crs   
    kill <crs pid>
    ps -ef | grep evm
    kill <evm pid>
    ps -ef | grep css   
    kill <css pid>

    3.Remove the files in /var/tmp/.oracle or /tmp/.oracle on ALL NODES
    rm -f /var/tmp/.oracle/*
    or
    rm -f /tmp/.oracle/*

    4.Remove the ocr.loc at /etc/oracle on ALL NODES
    5.De-install the CRS home in the Oracle Universal Installer
    6.Remove the CRS install location on ALL NODES
    In case you have not removed the CRS install directory, please do so.
    7.Remove OCR and Voting disk from shared location from any 1 node

    How to drop a database and its home?
    DROP Database  or Use DBCA:
    shutdown abort;
    startup mount exclusive restrict;
    drop database;
    exit

    Use runInstaller to remove the database and ASM homes
    Hope this helps !!

    ORACLE - ASM

    Step 1: Download the rpms and install them on all nodes if RAC (as root):
    For Unbreakable
    rpm -Uvh oracleasm-support-2.1.4-1.el5.x86_64.rpm \
             oracleasmlib-2.0.4-1.el5.x86_64.rpm \
             oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm
    warning: oracleasm-support-2.1.3-1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
    Preparing...                ########################################### [100%]
       1:oracleasm-support      ########################################### [ 33%]
       2:oracleasm-2.6.18-128.el########################################### [ 67%]
       3:oracleasmlib           ########################################### [100%]

    Step 2: Configure ASM (on both nodes if RAC): /etc/init.d/oracleasm configure (as root)

    Configuring the Oracle ASM library driver.
    This will configure the on-boot properties of the Oracle ASM library
    driver.  The following questions will determine whether the driver is
    loaded on boot and what permissions it will have.  The current values
    will be shown in brackets ('[]').  Hitting <ENTER> without typing an
    answer will keep that current value.  Ctrl-C will abort.

    Default user to own the driver interface []: oracle
    Default group to own the driver interface []: dba
    Start Oracle ASM library driver on boot (y/n) [n]: y
    Scan for Oracle ASM disks on boot (y/n) [y]: y
    Writing Oracle ASM library driver configuration: done
    Initializing the Oracle ASMLib driver:  [  OK  ]
    Scanning the system for Oracle ASMLib disks:  [  OK  ]

    3) Check ASM status:
    /etc/init.d/oracleasm status
    /etc/init.d/oracleasm listdisks
    /etc/init.d/oracleasm enable

    4) Create ASM disks:
    # /etc/init.d/oracleasm createdisk DISK01 /dev/dm01
    Marking disk "/dev/dm01" as an ASM disk:                   [  OK  ]
    # /etc/init.d/oracleasm createdisk DISK02 /dev/dm02
    Marking disk "/dev/dm02" as an ASM disk:                   [  OK  ]

    5) Scan disk on other nodes:
    /etc/init.d/oracleasm scandisks

    6) Create disk group, add and drop:
    CREATE DISKGROUP jdata01 EXTERNAL REDUNDANCY DISK 'ORCL:DATA01';
    ALTER DISKGROUP jdata01 ADD DISK 'ORCL:DISK02';
    ALTER DISKGROUP jdata01 DROP DISK DISK02;

    7) Verification:
    1) select group_number, name, state, total_mb, free_mb, usable_file_mb, type from v$asm_diskgroup;

    GROUP_NUMBER NAME      STATE     TOTAL_MB  FREE_MB  USABLE_FILE_MB  TYPE
    ------------ --------- --------- --------- -------- --------------- ------
               3 JDATA01   MOUNTED      102398    95565           95565 EXTERN
     

    2) select * from v$asm_operation;
    GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES ERROR_CODE
    ------------ ----- ---- ---------- ---------- ---------- ---------- ---------- ----------- -----------
               1 REBAL RUN           1          1      34578      75081       2498          16

    8) Configuring the scan order: Oracle ASMLib configuration file is located at /etc/sysconfig/oracleasm.
    # This is a configuration file for automatic loading of the Oracle
    # Automatic Storage Management library kernel driver.  It is generated
    # By running /etc/init.d/oracleasm configure.  Please use that method
    # to modify this file
    #

    # ORACLEASM_ENABLED: 'true' means to load the driver on boot.
    ORACLEASM_ENABLED=true

    # ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
    ORACLEASM_UID=oracle

    # ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
    ORACLEASM_GID=dba

    # ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
    ORACLEASM_SCANBOOT=true

    # ORACLEASM_SCANORDER: Matching patterns to order disk scanning
    ORACLEASM_SCANORDER=""

    # ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
    ORACLEASM_SCANEXCLUDE=""
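    With multipath storage, the scan order is typically adjusted so that ASMLib prefers the device-mapper devices over the underlying single paths. A sketch - the prefix values are illustrative and depend on your device names:

    ```shell
    # /etc/sysconfig/oracleasm (edit as root, then restart oracleasm)
    # Scan device-mapper (multipath) devices first ...
    ORACLEASM_SCANORDER="dm"
    # ... and exclude the raw single-path /dev/sd* devices from the scan
    ORACLEASM_SCANEXCLUDE="sd"
    ```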

    Saturday, January 12, 2013

    ORACLE - Flashback database

    Flashback database is a new feature as of Oracle 10g that can be used to quickly flashback a database to a point in time.

    Advantage of Flashback features:
    1)Point-in-time recovery – to quickly rewind the database to point-in-time by using restore points.
    2)Data Guard Fast-start fail-over - to quickly reinstate the primary database
    3)Manual primary database reinstate
    4)11g Data Guard Snapshot Standby
    5)Restoring Test and Training Databases
    6)RMAN -One 11g example of RMAN implicit use of flashback database is when block media recovery is performed.

    How to configure flashback database ?
    DB_FLASHBACK_RETENTION_TARGET : determines how far back in time you can flash back the database (default 1440 minutes)
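    A minimal configuration sketch - the FRA destination and size values are illustrative:

    ```sql
    -- Flashback logs live in the Fast Recovery Area
    ALTER SYSTEM SET db_recovery_file_dest_size = 50G;
    ALTER SYSTEM SET db_recovery_file_dest = '+JDATA';
    ALTER SYSTEM SET db_flashback_retention_target = 2880;  -- minutes (2 days)

    -- 11gR2 allows enabling flashback while open; in 10g, do it in MOUNT mode
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE FLASHBACK ON;
    ALTER DATABASE OPEN;

    -- Verify
    SELECT flashback_on FROM v$database;
    ```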

    How to delete flashback logs ?
    1) Verify the flashback logs:
    SELECT file_type, trunc(space_used*percent_space_used/100/1024/1024/1024) used,
    trunc(space_reclaimable*percent_space_reclaimable/100/1024/1024/1024) reclaimable,
    B.number_of_files FROM v$recovery_file_dest A, v$flash_recovery_area_usage B;

    FILE_TYPE          USED RECLAIMABLE NUMBER_OF_FILES
    ------------ ---------- ----------- ---------------
    CONTROLFILE           0           0               0
    ONLINELOG             1           0              47
    ARCHIVELOG           39           0            2881  => Check archive logs
    BACKUPPIECE           0           0              43  => Check RMAN backups
    IMAGECOPY             0           0               0
    FLASHBACKLOG        122           0           12582  => Check flashback logs

    2) Stop the instance or database:
    srvctl stop instance -d DBNAME -i InstanceName1
    srvctl stop instance -d DBNAME -i InstanceName2

    3) Drop the flashback logs:
    alter database flashback off;

    4) Enable flashback logs:
    alter database flashback on;

    5) Start the instance or database:
    srvctl start instance -d DBNAME -i InstanceName1
    srvctl start instance -d DBNAME -i InstanceName2