Finding the cause of heavy archive log generation in a particular time period
As mentioned in the previous post, we are going to look at SQL queries that find the cause of excessive archive log generation in a problematic window.
Sometimes a customer will ask which SQL statement generated a large number of archive logs at some earlier point, hours or even days before (not the current archive log generation, which was covered in the previous post). In such scenarios, follow the steps below.
Step 1:
======
First, you must know when the largest number of archive logs was generated. For that, use the query below; it shows how many archive logs were generated in each hour of the day.
col day for a12
set lines 1000
set pages 999
col "00" for a3
col "01" for a3
col "02" for a3
col "03" for a3
col "04" for a3
col "05" for a3
col "06" for a3
col "07" for a3
col "08" for a3
col "09" for a3
col "10" for a3
col "11" for a3
col "12" for a3
col "13" for a3
col "14" for a3
col "15" for a3
col "16" for a3
col "17" for a3
col "18" for a3
col "19" for a3
col "20" for a3
col "21" for a3
col "22" for a3
col "23" for a3
SELECT
to_char(first_time,'DD-MON-YYYY') day,
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') "00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') "01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') "02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') "03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') "04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') "05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') "06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') "07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') "08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') "09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') "10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') "11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') "12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') "13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') "14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') "15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') "16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') "17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') "18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') "19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') "20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') "21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') "22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') "23",
count(*) Tot
from
v$log_history
WHERE first_time > sysdate -7
GROUP by
to_char(first_time,'DD-MON-YYYY'),trunc(first_time) order by trunc(first_time);
(adjust the WHERE condition above to the time range you need)
From the output, we can identify the problematic hour(s) of heavy archive log generation.
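If you also want the volume of redo archived, not just the count of log switches, a similar hourly breakdown can be taken from v$archived_log. This is a sketch; it assumes the database runs in ARCHIVELOG mode and that the view still holds rows for the period of interest:

```sql
-- Redo volume archived per hour over the last 7 days, in MB.
-- BLOCKS * BLOCK_SIZE gives the size of each archived log.
SELECT to_char(first_time,'DD-MON-YYYY HH24') hour,
       round(sum(blocks*block_size)/1024/1024) mb_generated
FROM   v$archived_log
WHERE  first_time > sysdate - 7
GROUP  BY to_char(first_time,'DD-MON-YYYY HH24')
ORDER  BY 1;
```

A large count of small logs and a few very large logs can look the same in the count-only report, so the MB figure helps confirm which hour really produced the most redo.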
Step 2:
=======
Next, we must find the table(s) or object(s) that have undergone the largest number of db block changes.
http://www.dba-oracle.com/m_db_block_changes.htm says that
"The db block changes Oracle metric is closely associated to consistent changes, this statistics counts the total number of changes that were made to all blocks in the SGA that were part of an update or delete operation. These are changes that are generating redo log entries and hence will be permanent changes to the database if the transaction is committed."
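As a quick instance-wide sanity check before drilling down to segments, the statistic itself can be read from v$sysstat (a sketch; note the value is cumulative since instance startup, so compare readings over time rather than looking at a single value):

```sql
-- Cumulative count of "db block changes" since instance startup.
SELECT name, value
FROM   v$sysstat
WHERE  name = 'db block changes';
```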
To find the objects and their db block changes during a particular time interval, use the SQL query below.
SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24:MI') snap_time,
       dhsso.object_name,
       sum(db_block_changes_delta) as maxchanges
FROM   dba_hist_seg_stat dhss,
       dba_hist_seg_stat_obj dhsso,
       dba_hist_snapshot dhs
WHERE  dhs.snap_id = dhss.snap_id
AND    dhs.instance_number = dhss.instance_number
AND    dhss.obj# = dhsso.obj#
AND    dhss.dataobj# = dhsso.dataobj#
AND    begin_interval_time BETWEEN to_date('2014-03-02 17:00:00','YYYY-MM-DD HH24:MI:SS')
                               AND to_date('2014-03-02 20:00:00','YYYY-MM-DD HH24:MI:SS')
GROUP  BY to_char(begin_interval_time,'YYYY_MM_DD HH24:MI'), dhsso.object_name
ORDER  BY maxchanges ASC;
Output (in ascending order):
========================
SNAP_TIME           OBJECT_NAME                    MAXCHANGES
------------------- ------------------------------ ------------------------
2013_05_22 17:00 MGMT_CURRENT_AVAILABILITY_PK 7104
2013_05_22 17:00 MGMT_AVAIL_MARKER_PK 7312
2013_05_22 18:00 MGMT_METRICS_1HOUR_PK_INT 20112
2013_05_22 17:00 MGMT_METRICS_1HOUR_PK_INT 20864
2013_05_22 20:00 MGMT_METRICS_1HOUR_PK_INT 21152
2013_05_22 17:00 STAGED_MESSAGE_PARAMS 434720
2013_05_22 18:00 STAGED_MESSAGE_PARAMS 551968
2013_05_22 20:00 STAGED_MESSAGE_PARAMS 554816
2013_05_22 19:01 STAGED_MESSAGE_PARAMS 560944
2013_05_22 17:00 STAGED_MESSAGE_PK_IDX 587168
2013_05_22 19:01 STAGED_MESSAGE_PK_IDX 737296
2013_05_22 20:00 STAGED_MESSAGE_PK_IDX 739120
2013_05_22 18:00 STAGED_MESSAGE_PK_IDX 742656
Step 3:
=======
From the output, we can see that the maximum db block changes were made to the STAGED_MESSAGE% objects.
Using those object names, we can easily find the SQL statements that caused the heavy archive log generation with the query below.
col SQL_TEXT for a60
SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24:MI'),
dbms_lob.substr(sql_text,5000,1),
dhss.instance_number,
dhss.sql_id,executions_delta,rows_processed_delta
FROM dba_hist_sqlstat dhss,
dba_hist_snapshot dhs,
dba_hist_sqltext dhst
WHERE upper(dhst.sql_text) LIKE '%STAGED_MESSAGE%'
AND dhss.snap_id=dhs.snap_id
AND dhss.instance_Number=dhs.instance_number
AND begin_interval_time BETWEEN to_date('2014-03-02 17:00:00','YYYY-MM-DD HH24:MI:SS')
AND to_date('2014-03-02 20:00:00','YYYY-MM-DD HH24:MI:SS')
AND dhss.sql_id = dhst.sql_id;
Step 4:
======
The SQL query below finds the user id and program, using the sql_id values from the output above.
SELECT to_char(sample_time,'yyyy_mm_dd hh24:mi:ss'),user_id,program
FROM dba_hist_active_sess_history WHERE sql_id in ('5bpaj6hmpjq5h','c3utht63bffn9','86vxuhgt0fc80','aqkdd2dgqqr4p')
AND to_char(sample_time,'YYYY-MM-DD HH24:MI:SS') between '2014-03-02 17:00:00' and '2014-03-02 20:00:00' order by 1;
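To turn the numeric USER_ID values from that output into user names, the same query can be joined to dba_users. This is a sketch reusing two of the sample sql_ids from above:

```sql
-- Resolve USER_ID to USERNAME for the sessions that ran the heavy SQL.
SELECT to_char(h.sample_time,'yyyy_mm_dd hh24:mi:ss') sample_time,
       u.username,
       h.program
FROM   dba_hist_active_sess_history h,
       dba_users u
WHERE  h.user_id = u.user_id
AND    h.sql_id IN ('5bpaj6hmpjq5h','c3utht63bffn9')
AND    h.sample_time BETWEEN to_date('2014-03-02 17:00:00','YYYY-MM-DD HH24:MI:SS')
                         AND to_date('2014-03-02 20:00:00','YYYY-MM-DD HH24:MI:SS')
ORDER  BY 1;
```

With the user name and program in hand, you can take the finding back to the application team responsible for the heavy DML.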