
SQL query to find the cause or reason for more archive log generation

Finding the cause of heavy archive log generation in a particular time period

As mentioned in the previous post, here we will look at SQL queries to find the cause of heavy archive log generation in a problematic window.

Sometimes a customer asks which SQL statement generated the heavy archive log volume hours or even days earlier (not the current archive log generation, which was covered in the previous post). In such scenarios, follow the steps below.

Step 1:
======
First, identify the period during which the largest number of archive logs was generated. The SQL query below reports how many archive logs were generated in each hour of the last seven days.

col day for a12
set lines 1000
set pages 999
col "00" for a3
col "01" for a3
col "02" for a3
col "03" for a3
col "04" for a3
col "05" for a3
col "06" for a3
col "07" for a3
col "08" for a3
col "09" for a3
col "10" for a3
col "11" for a3
col "12" for a3
col "13" for a3
col "14" for a3
col "15" for a3
col "16" for a4
col "17" for a3
col "18" for a4
col "19" for a3
col "20" for a3
col "21" for a3
col "22" for a3
col "23" for a3
SELECT 
to_char(first_time,'DD-MON-YYYY') day, 
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99')  "00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') "01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') "02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') "03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') "04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') "05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') "06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') "07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') "08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') "09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') "10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') "11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') "12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') "13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') "14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') "15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'999') "16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') "17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'999') "18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') "19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') "20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') "21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') "22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') "23",
count(*) Tot
from
v$log_history
WHERE first_time > sysdate -7  
GROUP by 
to_char(first_time,'DD-MON-YYYY'),trunc(first_time) order by trunc(first_time);

(adjust the WHERE condition above to the time range you need)

From the output, we can identify the problematic hour(s) of heavy archive log generation.
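
If you also want to see the volume of redo archived (not just the count of log switches), a similar breakdown by day can be taken from v$archived_log. The query below is a minimal sketch, assuming the rows for the period of interest are still present in v$archived_log.

-- Approximate archive log volume generated per day in MB
-- (if multiple archive destinations are in use, filter on dest_id to avoid double counting)
SELECT trunc(first_time) day,
       round(sum(blocks * block_size)/1024/1024) mb_generated
FROM v$archived_log
WHERE first_time > sysdate - 7
GROUP BY trunc(first_time)
ORDER BY trunc(first_time);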

Step 2:
=======
Next, find the table(s) or object(s) that underwent the highest number of db block changes.

http://www.dba-oracle.com/m_db_block_changes.htm says:

"The db block changes Oracle metric is closely associated to consistent changes, this statistics counts the total number of changes that were made to all blocks in the SGA that were part of an update or delete operation. These are changes that are generating redo log entries and hence will be permanent changes to the database if the transaction is committed."


To see the objects and their db block changes in a particular time interval, use the SQL query below (adjust the interval to your problematic window).

SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24:MI') snap_time,
       dhsso.object_name,
       sum(db_block_changes_delta) as maxchanges
FROM dba_hist_seg_stat dhss,
     dba_hist_seg_stat_obj dhsso,
     dba_hist_snapshot dhs
WHERE dhs.snap_id = dhss.snap_id
AND dhs.instance_number = dhss.instance_number
AND dhss.obj# = dhsso.obj#
AND dhss.dataobj# = dhsso.dataobj#
AND begin_interval_time BETWEEN to_date('2014-03-02 17:00:00','YYYY-MM-DD HH24:MI:SS')
                            AND to_date('2014-03-02 20:00:00','YYYY-MM-DD HH24:MI:SS')
GROUP BY to_char(begin_interval_time,'YYYY_MM_DD HH24:MI'), dhsso.object_name
ORDER BY maxchanges ASC;


Output (in ascending order):
========================
SNAP_TIME          OBJECT_NAME                      MAXCHANGES
----------------   ------------------------------   ----------
2013_05_22 17:00   MGMT_CURRENT_AVAILABILITY_PK           7104
2013_05_22 17:00   MGMT_AVAIL_MARKER_PK                   7312
2013_05_22 18:00   MGMT_METRICS_1HOUR_PK_INT             20112
2013_05_22 17:00   MGMT_METRICS_1HOUR_PK_INT             20864
2013_05_22 20:00   MGMT_METRICS_1HOUR_PK_INT             21152
2013_05_22 17:00   STAGED_MESSAGE_PARAMS                434720
2013_05_22 18:00   STAGED_MESSAGE_PARAMS                551968
2013_05_22 20:00   STAGED_MESSAGE_PARAMS                554816
2013_05_22 19:01   STAGED_MESSAGE_PARAMS                560944
2013_05_22 17:00   STAGED_MESSAGE_PK_IDX                587168
2013_05_22 19:01   STAGED_MESSAGE_PK_IDX                737296
2013_05_22 20:00   STAGED_MESSAGE_PK_IDX                739120
2013_05_22 18:00   STAGED_MESSAGE_PK_IDX                742656

Step 3:
=======
From the output we can see that the maximum db block changes were made to the STAGED_MESSAGE% objects.
Using these object names, the SQL statements that caused the heavy archive log generation can be found with the query below.

col SQL_TEXT for a60
SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24:MI') snap_time,
       dbms_lob.substr(sql_text,5000,1) sql_text,
       dhss.instance_number,
       dhss.sql_id,
       executions_delta,
       rows_processed_delta
FROM dba_hist_sqlstat dhss,
     dba_hist_snapshot dhs,
     dba_hist_sqltext dhst
WHERE upper(dhst.sql_text) LIKE '%STAGED_MESSAGE%'
AND dhss.snap_id = dhs.snap_id
AND dhss.instance_number = dhs.instance_number
AND begin_interval_time BETWEEN to_date('2014-03-02 17:00:00','YYYY-MM-DD HH24:MI:SS')
                            AND to_date('2014-03-02 20:00:00','YYYY-MM-DD HH24:MI:SS')
AND dhss.sql_id = dhst.sql_id;

Step 4:
======
The SQL query below finds the user id and program for the sql_ids returned by the above output.

SELECT to_char(sample_time,'yyyy_mm_dd hh24:mi:ss') sample_time, user_id, program
FROM dba_hist_active_sess_history
WHERE sql_id IN ('5bpaj6hmpjq5h','c3utht63bffn9','86vxuhgt0fc80','aqkdd2dgqqr4p')
AND to_char(sample_time,'YYYY-MM-DD HH24:MI:SS') BETWEEN '2014-03-02 17:00:00' AND '2014-03-02 20:00:00'
ORDER BY 1;
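
The user_id values from the above output can be mapped to usernames through the data dictionary. The lookup below is a simple sketch, assuming the users still exist in dba_users.

-- Map the user_id values returned above to usernames
SELECT user_id, username
FROM dba_users
WHERE user_id IN (SELECT DISTINCT user_id
                  FROM dba_hist_active_sess_history
                  WHERE sql_id IN ('5bpaj6hmpjq5h','c3utht63bffn9','86vxuhgt0fc80','aqkdd2dgqqr4p'));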
