Wednesday, November 14, 2012
Is SQL cached when used in a trigger?
I should start by mentioning that the case below is not my own; I am reproducing an example from Tom Kyte's book, Effective Oracle by Design. Thanks, Tom, for the great material!
The results show that using SQL inside triggers does not cause excessive parsing in 11g.
In previous releases, the SQL statements used in a trigger were cached only for the duration of the call to the server; each separate call to the server caused the statements to be soft parsed again and again.
For the setup, we use the emp table and a summary table holding counts of employees by deptno.
The counts will be maintained by a trigger that fires after INSERT, UPDATE, or DELETE on the emp table.
SQL> select banner from v$version where rownum <= 1;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
SQL> create table emp as
2 select ename, empno, deptno
3 from scott.emp;
Table created.
SQL> create table emp_dept_cnt
2 (deptno primary key, cnt)
3 organization index
4 as
5 select deptno, count(*)
6 from emp
7 group by
8 deptno;
Table created.
SQL> create trigger emp_dept_cnt_trigger
2 after insert or update or delete on emp
3 for each row
4 begin
5 if (inserting or updating)
6 then
7 merge into emp_dept_cnt in_trigger
8 using (select :new.deptno deptno from dual) n
9 on (in_trigger.deptno = n.deptno)
10 when matched then
11 update set cnt=cnt+1
12 when not matched then
13 insert (deptno, cnt) values (:new.deptno,1);
14 end if;
15 if (updating or deleting)
16 then
17 update emp_dept_cnt in_trigger
18 set cnt = cnt-1
19 where deptno = :old.deptno;
20 end if;
21 end;
22 /
Trigger created.
Now we trace the activity that will fire the trigger.
SQL> alter session set sql_trace=true;
Session altered.
SQL> insert into emp (ename,empno,deptno)
2 values ('john',123,10);
1 row created.
SQL> insert into emp (ename,empno,deptno)
2 values ('mike',123,10);
1 row created.
SQL> delete from emp;
16 rows deleted.
In the resulting trace file, we are interested in the parse counts for the MERGE statement and for the UPDATE.
Since we performed two insert operations, we can observe that the SQL statements are cached: there is only one parse call, but two executions.
---
MERGE INTO EMP_DEPT_CNT IN_TRIGGER USING
(SELECT :B1 DEPTNO FROM DUAL) N ON
(IN_TRIGGER.DEPTNO = N.DEPTNO) WHEN MATCHED THEN UPDATE SET CNT=CNT+1 WHEN
NOT MATCHED THEN INSERT (DEPTNO, CNT) VALUES (:B1 ,1)
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.01 0.00 0 15 10 2
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.01 0.00 0 15 10 2
UPDATE EMP_DEPT_CNT IN_TRIGGER SET CNT = CNT-1
WHERE
DEPTNO = :B1
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 16 0.00 0.00 0 16 16 16
---
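The parse-once/execute-many pattern visible in the trace can be mimicked with a toy cursor cache. The Python sketch below is only an analogy for what the PL/SQL engine does with trigger SQL, not Oracle's actual implementation; the class and the abbreviated statement text are invented for illustration:

```python
class CursorCache:
    """Toy session cursor cache: a statement is parsed on first use
    and only executed on subsequent uses of the same SQL text."""
    def __init__(self):
        self.cache = {}
        self.parse_count = 0
        self.execute_count = 0

    def run(self, sql):
        if sql not in self.cache:
            self.parse_count += 1        # parse happens only on first use
            self.cache[sql] = object()   # stands in for the parsed cursor
        self.execute_count += 1          # every call is an execution

cc = CursorCache()
merge_sql = "merge into emp_dept_cnt ..."  # abbreviated statement text
cc.run(merge_sql)   # first insert fires the trigger: parse + execute
cc.run(merge_sql)   # second insert: execute only
print(cc.parse_count, cc.execute_count)    # 1 2
```

This matches the trace above: one Parse call, two Execute calls for the same MERGE text.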
In previous releases, this level of caching was possible only by packaging the SQL and invoking the packaged procedures from the trigger.
SQL> create or replace package emp_dept_cnt_pkg
2 as
3 procedure insert_update(p_deptno in number);
4 procedure update_delete(p_deptno in number);
5 end;
6 /
Package created.
SQL> create or replace package body emp_dept_cnt_pkg
2 as
3 procedure insert_update (p_deptno in number)
4 as
5 begin
6 merge into emp_dept_cnt in_package
7 using (select p_deptno deptno from dual) n
8 on (in_package.deptno = n.deptno)
9 when matched then
10 update set cnt=cnt+1
11 when not matched then
12 insert (deptno, cnt) values (p_deptno,1);
13 end;
14 procedure update_delete (p_deptno in number)
15 as
16 begin
17 update emp_dept_cnt in_package
18 set cnt = cnt-1
19 where deptno = p_deptno;
20 end;
21 end;
22 /
Package body created.
The trigger will now be built as follows:
SQL> create or replace trigger emp_dept_cnt_trigger
2 after insert or update or delete on emp
3 for each row
4 begin
5 if (inserting or updating)
6 then
7 emp_dept_cnt_pkg.insert_update (:new.deptno);
8 end if;
9 if (updating or deleting)
10 then
11 emp_dept_cnt_pkg.update_delete (:old.deptno);
12 end if;
13 end;
14 /
Trigger created.
SQL> alter session set sql_trace=true;
Session altered.
SQL> insert into emp (ename,empno,deptno)
2 values ('bob',124,10);
1 row created.
SQL> insert into emp (ename,empno,deptno)
2 values ('ken',124,10);
1 row created.
SQL> delete from emp;
2 rows deleted.
SQL> alter session set sql_trace=false;
Session altered.
-----
The trace shows the parse count is one: our SQL is cached.
MERGE INTO EMP_DEPT_CNT IN_PACKAGE USING
(SELECT :B1 DEPTNO FROM DUAL) N ON
(IN_PACKAGE.DEPTNO = N.DEPTNO) WHEN MATCHED THEN UPDATE SET CNT=CNT+1 WHEN
NOT MATCHED THEN INSERT (DEPTNO, CNT) VALUES (:B1 ,1)
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.01 0.01 0 11 6 2
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.01 0.01 0 11 6 2
Tuesday, October 30, 2012
Migrating a subset of a database from single-byte WE8MSWIN1252 to multibyte UTF8
This is a small showcase of exporting a small subset of a single-byte character set database and importing it into a multibyte character set database.
This showcase does not make use of the Csscan tool.
That tool analyzes the code points in the database and verifies that they are valid in both the source and the target database character sets.
The output of the Csscan utility is vital in determining the health of the data in the database; this becomes especially crucial when changing character sets, as well as during import and export operations or database-link transfers where the source and target databases differ in character set.
We will achieve this by setting the environment variable NLS_LANG to our source character set, WE8MSWIN1252, for both the export and the import.
On the source db we create a demo table and populate it with 100,000 rows:
C:\Users\BERCA>set ORACLE_HOME=Q:\app\product\11.2.0\dbhome_1
C:\Users\BERCA>set ORACLE_SID=HORIA2
SQL> select banner from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for 64-bit Windows: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
SQL> set linesize 300
SQL> set pagesize 80
SQL> select * from nls_database_parameters
2 where parameter='NLS_CHARACTERSET';
PARAMETER VALUE
------------------------------ ----------------------------------------
NLS_CHARACTERSET WE8MSWIN1252
SQL> drop table v1;
Table dropped.
SQL> drop sequence seq;
Sequence dropped.
SQL> create table v1 (id number(9), col varchar2(144));
Table created.
SQL> create sequence seq start with 100 maxvalue 1000000000;
Sequence created.
SQL> set timing on;
SQL> begin
2 for i in 1..100000 loop
3 execute immediate 'insert into v1 (id,col) values
4 (seq.nextval,''1234567890abcdefghijklmnopqrstuvwxyz1234567890abcdefghijklmnopqrstuvwxyz1234567890abcdefgh
ijklmnopqrstuvwxyz1234567890abcdefghijklmnopqrstuvwxyz'')';
5 end loop;
6 commit;
7 end;
8 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:04.76
SQL> select count(*) from v1;
COUNT(*)
----------
100000
Elapsed: 00:00:00.14
SQL> set timing off;
SQL> select sum(bytes) from user_segments where segment_name='V1';
SUM(BYTES)
----------
18874368
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
C:\Users\BERCA>set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
C:\Users\BERCA>expdp horia/horia DIRECTORY=temp DUMPFILE=v1_exp.dmp TABLES=v1
Export: Release 11.2.0.3.0 - Production on Tue Oct 30 14:52:31 2012
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "HORIA"."SYS_EXPORT_TABLE_01": horia/******** DIRECTORY=temp DUMPFILE=v1_exp.dmp TABLES=v1
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 18 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "HORIA"."V1" 14.68 MB 100000 rows
Master table "HORIA"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HORIA.SYS_EXPORT_TABLE_01 is:
Q:\APP\TEMP\V1_EXP.DMP
Job "HORIA"."SYS_EXPORT_TABLE_01" successfully completed at 14:52:41
On the target, where we have NLS_CHARACTERSET = AL32UTF8, we also set the variable NLS_LANG to WE8MSWIN1252:
C:\Users\BERCA>set ORACLE_HOME=Q:\app\product\11.2.0\dbhome_1
C:\Users\BERCA>set ORACLE_SID=HORIA
C:\Users\BERCA>set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
SQL> select banner from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for 64-bit Windows: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
SQL> set linesize 300
SQL> set pagesize 80
SQL> select * from nls_database_parameters
2 where parameter='NLS_CHARACTERSET';
PARAMETER VALUE
------------------------------ ----------------------------------------
NLS_CHARACTERSET AL32UTF8
C:\Users\BERCA>impdp horia/horia DIRECTORY=temp DUMPFILE=v1_exp.dmp
Import: Release 11.2.0.3.0 - Production on Tue Oct 30 14:55:35 2012
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "HORIA"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "HORIA"."SYS_IMPORT_FULL_01": horia/******** DIRECTORY=temp DUMPFILE=v1_exp.dmp
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "HORIA"."V1" 14.68 MB 100000 rows
Job "HORIA"."SYS_IMPORT_FULL_01" successfully completed at 14:55:39
SQL> select sum(bytes) from user_segments where segment_name='V1';
SUM(BYTES)
----------
18874368
One observation we can make is that the size of the table remains the same.
How is this possible?
US7ASCII characters (A-Z, a-z, 0-9, and ./?,*# etc.) are always 1 byte in AL32UTF8, so for most Western European languages the impact is rather limited: only "special" characters like accented e and a will use more bytes than in an 8-bit character set.
When converting a Cyrillic or Arabic system to AL32UTF8, all the Cyrillic or Arabic data will take considerably more bytes to store.
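The byte expansion can be illustrated outside the database with Python's built-in cp1252 (Windows-1252) and UTF-8 codecs; this is just a sketch of the encoding behavior, not of Oracle's conversion machinery:

```python
# Compare byte lengths of the same text in single-byte
# Windows-1252 vs. multibyte UTF-8.
for text in ["abcdef1234", "caf\u00e9", "na\u00efve"]:
    cp1252_len = len(text.encode("cp1252"))  # always 1 byte per character
    utf8_len = len(text.encode("utf-8"))     # 1 byte for ASCII, 2+ otherwise
    print(text, cp1252_len, utf8_len)

# Plain ASCII occupies the same space in both encodings, which is
# why the all-ASCII demo table did not grow after the migration.
assert len("abcdef1234".encode("utf-8")) == len("abcdef1234".encode("cp1252"))
# An accented character doubles: 1 byte in cp1252, 2 bytes in UTF-8.
assert len("\u00e9".encode("cp1252")) == 1
assert len("\u00e9".encode("utf-8")) == 2
```

Since our v1 table held only ASCII letters and digits, every character stayed 1 byte, and the segment size was unchanged at 18874368 bytes.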
You may find it useful to research these notes as a reference for your projects:
AL32UTF8 / UTF8 (Unicode) Database Character Set Implications [ID 788156.1]
B.2) How much will my database grow when going to AL32UTF8?
B.5) Going to AL32UTF8 from another characterset.
Changing the NLS_CHARACTERSET to AL32UTF8 / UTF8 (Unicode) [ID 260192.1]
Csscan output explained [ID 444701.1]
Storing and Checking Character Codepoints in a UTF8/AL32UTF8 (Unicode) database [ID 69518.1]
ORA-01401 / ORA-12899 While Importing Data In An AL32UTF8 / UTF8 (Unicode) Or Other Multibyte NLS_CHARACTERSET Database. [ID 1297961.1]
Friday, August 31, 2012
Infamous _X11TransSocketINETConnect() can't get address for localhost
I use Xming as an X server.
In Windows, look for the shortcut that starts Xming and right-click it to go to Properties.
Add -ac to your Xming shortcut target:
"C:\Program Files\Xming\Xming.exe" :0 -clipboard -multiwindow -ac
The -ac parameter allows everyone to connect, like 'xhost +'.
Start Xming.
Open PuTTY -> Connection -> SSH -> check Enable X11 forwarding, and put in the actual hostname:0.0.
Go back to Session and log in.
export DISPLAY=HOSTNAME:0.0
You can check in /etc/ssh/ssh_config whether X11 forwarding is enabled.
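If the connection still fails, it helps to confirm which host and TCP port the DISPLAY value actually points at: an X server listening over TCP uses port 6000 plus the display number. A small illustrative helper in Python (the parse_display name is my own, not part of any X11 library):

```python
def parse_display(display):
    """Split a DISPLAY value like 'HOSTNAME:0.0' into the host and
    the TCP port the X server listens on (6000 + display number)."""
    host, _, rest = display.partition(":")
    # The display number is the part before the optional screen suffix.
    display_num = int(rest.split(".")[0]) if rest else 0
    return host or "localhost", 6000 + display_num

print(parse_display("HOSTNAME:0.0"))  # ('HOSTNAME', 6000)
print(parse_display(":1"))            # ('localhost', 6001)
```

With the host and port in hand, a plain TCP connectivity check (telnet or a socket connect) tells you whether the "can't get address" error is a name-resolution or a reachability problem.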
Thursday, August 9, 2012
Check your alert log when you get ORA-03113 at startup
You're a happy DBA wanting to start your Oracle DB Server.
You're running on Windows, and you created a .bat file to start your listener and database. I start both the listener and the instance in one script, using oradim to start the instance. That's not the fun part.
You try to connect and you get ORA-03113: end-of-file on communication channel.
Because you think you're smart, you start to troubleshoot. You tell yourself this cannot be a communication issue; there cannot be a problem between the client and the server process.
In case it wasn't obvious, that was me in the middle of the issue.
I shut down the services and turn to my old friend SQL*Plus, whom I trust the most.
I take a step-by-step approach:
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 1670221824 bytes
Fixed Size 2176328 bytes
Variable Size 1526729400 bytes
Database Buffers 134217728 bytes
Redo Buffers 7098368 bytes
SQL> alter database mount;
Database altered.
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION STARTUP_T STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS SHU DATABASE_STATUS INSTANCE_ROLE ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
1 horia
OTPL1110003
11.2.0.1.0 08-AUG-12 MOUNTED NO 1 STARTED
ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 8868
Session ID: 2826 Serial number: 1
I stared at this for a couple of ten-minute stretches, and then it hit me to check my best friend forever, the alert log file.
There I quickly found out that I was running out of space in the recovery destination.
I did the cleanup and was able to bring my database up.
Two things to remember here:
1) ORA-03113 can be misleading; it is merely a symptom of an issue happening behind the scenes.
2) The alert log is your most valuable place for routine checks. A healthy system has a clean alert log file, and it is the DBA's role to keep it that way.
Here is the relevant alert log excerpt:
Errors in file d:\app\hberca\diag\rdbms\horia\horia\trace\horia_ora_8868.trc:
ORA-19815: WARNING: db_recovery_file_dest_size of 4102029312 bytes is 100.00% used, and has 0 remaining bytes available.
************************************************************************
You have following choices to free up space from recovery area:
1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
then consider changing RMAN ARCHIVELOG DELETION POLICY.
2. Back up files to tertiary device such as tape using RMAN
BACKUP RECOVERY AREA command.
3. Add disk space and increase db_recovery_file_dest_size parameter to
reflect the new space.
4. Delete unnecessary files using RMAN DELETE command. If an operating
system command was used to delete files, then use RMAN CROSSCHECK and
DELETE EXPIRED commands.
************************************************************************
Errors in file d:\app\hberca\diag\rdbms\horia\horia\trace\horia_ora_8868.trc:
ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim 39329280 bytes disk space from 4102029312 limit
ARCH: Error 19809 Creating archive log file to 'D:\APP\HBERCA\FLASH_RECOVERY_AREA\HORIA\ARCHIVELOG\2012_08_08\O1_MF_1_92_%U_.ARC'
Errors in file d:\app\hberca\diag\rdbms\horia\horia\trace\horia_ora_8868.trc:
ORA-16038: log 2 sequence# 92 cannot be archived
ORA-19809: limit exceeded for recovery files
ORA-00312: online log 2 thread 1: 'D:\APP\HBERCA\ORADATA\HORIA\REDO02.LOG'
USER (ospid: 8868): terminating the instance due to error 16038
Wed Aug 08 20:58:38 2012
---------------------------------------------------
---------------------------------------------------
There is yet another trick you can use.
If you are running your database with an spfile, you cannot simply edit it, as it is a binary file.
Oracle, however, uses the last specified value for a parameter.
So we build a pfile with these entries:
spfile=D:\app\hberca\product\11.2.0\dbhome_1\database\SPFILEHORIA.ORA
db_recovery_file_dest_size=5912M
And then we start up the database with the pfile we just created:
SQL> startup pfile=pfileHORIA.ora
ORACLE instance started.
Total System Global Area 1670221824 bytes
Fixed Size 2176328 bytes
Variable Size 1526729400 bytes
Database Buffers 134217728 bytes
Redo Buffers 7098368 bytes
Database mounted.
Database opened.
SQL>
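The last-value-wins behavior can be sketched with a tiny parameter-file reader; this Python snippet is purely illustrative (real Oracle pfile parsing also handles quoting, comments, and list values):

```python
def read_pfile(lines):
    """Parse name=value lines; a later occurrence of a parameter
    overrides an earlier one, which is why a db_recovery_file_dest_size
    line placed after the spfile= reference wins."""
    params = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, _, value = line.partition("=")
        params[name.strip().lower()] = value.strip()
    return params

# The spfile's stored value is superseded by the line that follows it.
pfile = [
    "db_recovery_file_dest_size=4102029312",  # value coming from the spfile
    "db_recovery_file_dest_size=5912M",       # our override: last value wins
]
print(read_pfile(pfile)["db_recovery_file_dest_size"])  # 5912M
```

That is exactly what the startup pfile above does: it pulls in the unreadable spfile first, then overrides the one parameter we need to change.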
Tuesday, July 31, 2012
RMAN Duplicate does not take last incremental backup
We are in 11gR2.
For differential incremental level 1 backups followed by an RMAN DUPLICATE command, it has been observed that RMAN ignores the last incremental backup performed, which in turn means you can lose some data in your freshly duplicated database.
We found a workaround for this behavior: issue an "alter system checkpoint" at the end of the backup script.
Oracle documents a bug only for cumulative backups, with expected resolution in version 12c.
Reference:
Rman Duplicate Does Not Use The Most Recent Incremental Backup [ID 1195673.1]
Monday, May 14, 2012
Why the Redo does not get transported to the standby destination?
I am in an 11gR2 environment with an Active Data Guard configuration.
After some maintenance activities that required the servers to be down, I saw that redo was not getting transported to one of our standby databases.
On the primary:
SQL> select banner from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL> select name,db_unique_name,database_role from v$database;
NAME DB_UNIQUE_NAME DATABASE_ROLE
--------- ------------------------------ ----------------
PROD PROD PRIMARY
The right view to start your investigation with is V$ARCHIVE_DEST_STATUS.
V$ARCHIVE_DEST_STATUS displays runtime and configuration information for the archived redo log destinations; the information in this view does not persist across an instance shutdown.
You can run the following query on the redo source database to determine the status of each redo transport destination.
The output is straightforward and indicates there is no listener at one of our destinations.
SQL> select destination, status, error from v$archive_dest_status where destination like 'STDBY';
DESTINATION
--------------------------------------------------------------------------------
STATUS ERROR
--------- -----------------------------------------------------------------
STDBY
ERROR ORA-12541: TNS:no listener
On the physical standby:
SQL> select banner from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL> select name,db_unique_name, database_role from v$database;
NAME DB_UNIQUE_NAME DATABASE_ROLE
--------- ------------------------------ ----------------
PROD STDBY PHYSICAL STANDBY
The fix was as simple as starting up the listener, and the error was corrected.
The redo started to be transported and applied on the standby server.
SQL> select destination, status,error from v$archive_dest_status where destination like 'STDBY';
DESTINATION
--------------------------------------------------------------------------------
STATUS ERROR
--------- -----------------------------------------------------------------
STDBY
VALID