
Finding the INSERT, UPDATE & DELETE SQL statements that contributed to heavy archive log generation

We received a mail from a customer stating that there was heavy business activity today after 10 AM, especially between 16:00 and 17:00 hrs where they saw 47 archive log switches, and around 100 GB of archive logs generated since 10 AM today.

Using the SQL query below, you can check in which hourly interval the most archive logs were generated in the database.

col day for a12
set lines 1000
set pages 999
col "00" for a3
col "01" for a3
col "02" for a3
col "03" for a3
col "04" for a3
col "05" for a3
col "06" for a3
col "07" for a3
col "08" for a3
col "09" for a3
col "10" for a3
col "11" for a3
col "12" for a3
col "13" for a3
col "14" for a3
col "15" for a3
col "16" for a3
col "17" for a3
col "18" for a3
col "19" for a3
col "20" for a3
col "21" for a3
col "22" for a3
col "23" for a3
SELECT
to_char(first_time,'DD-MON-YYYY') day,
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99')  "00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') "01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') "02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') "03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') "04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') "05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') "06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') "07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') "08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') "09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') "10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') "11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') "12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') "13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') "14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') "15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') "16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') "17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') "18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') "19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') "20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') "21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') "22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') "23",
count(*) Tot
from
v$log_history
WHERE first_time like '%17-NOV-14%'
GROUP by
to_char(first_time,'DD-MON-YYYY'),trunc(first_time) order by trunc(first_time);

Output:


As the customer said, 47 archive logs were generated at 4 PM (i.e. in the interval between 16:00 and 17:00 hrs).

Note:
first_time like '%17-NOV-14%' (the date format in my database is 17-NOV-14; change the literal to the day you are investigating, in your database's date format)
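
If your session's NLS_DATE_FORMAT is different, the same filter can be written without relying on the implicit DATE-to-string conversion at all. A minimal variant (the date literal is the day you are investigating):

WHERE first_time >= to_date('17-NOV-2014','DD-MON-YYYY')
  AND first_time <  to_date('18-NOV-2014','DD-MON-YYYY')

Since the customer also quoted a volume figure (around 100 GB), it can also help to cross-check the switch counts against the size of the archived logs per hour. A rough sketch using v$archived_log (dest_id = 1 is assumed to be the local archive destination; adjust for your configuration):

col gb for 999,990.99
select to_char(first_time,'DD-MON-YYYY HH24') hr,
       count(*) logs,
       round(sum(blocks*block_size)/1024/1024/1024,2) gb
from v$archived_log
where first_time >= to_date('17-NOV-2014','DD-MON-YYYY')
  and first_time <  to_date('18-NOV-2014','DD-MON-YYYY')
  and dest_id = 1
group by to_char(first_time,'DD-MON-YYYY HH24')
order by 1;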

So our aim here is to get all the SQL statements (UPDATE, INSERT & DELETE) involved in heavy redo generation between 16:00 and 17:00 hrs, i.e. within that particular one-hour interval.

One of our DBAs used the steps described in the post:
 http://oradba11g.blogspot.in/2014/05/sometimes-customer-would-ask-sql-query.html

Using that, he identified the two UPDATE statements that made the maximum number of block changes in the database at that time and sent them to the customer.
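
For reference, that approach ranks segments by the number of block changes recorded in AWR. A simplified sketch of the idea (assuming the standard DBA_HIST_SEG_STAT / DBA_HIST_SEG_STAT_OBJ columns and a single-database AWR repository; the time window below is the problematic hour):

select o.owner, o.object_name, o.object_type,
       sum(st.db_block_changes_delta) block_changes
from dba_hist_seg_stat st,
     dba_hist_seg_stat_obj o,
     dba_hist_snapshot sn
where st.obj#     = o.obj#
  and st.dataobj# = o.dataobj#
  and st.snap_id  = sn.snap_id
  and sn.begin_interval_time >= to_timestamp('17-NOV-14 16:00','DD-MON-RR HH24:MI')
  and sn.begin_interval_time <  to_timestamp('17-NOV-14 17:00','DD-MON-RR HH24:MI')
group by o.owner, o.object_name, o.object_type
order by block_changes desc;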

Then we got another mail from the customer asking:
  • Are we sure that these are the only two major contributors? Can you please check, because I spotted a few other heavy programs which were running but do not run this SQL.
  • Can you please tell what was the main reason for the archive generation between 4 and 5 PM today? We had 47 log switches in this hour.
To answer these questions, I used the query below to spot all UPDATE, DELETE & INSERT SQL statements executed by users in that particular interval, ordered by executions:

set lines 1000
set pages 50
set long 50000
col c1 heading Begin_Interval_time format a25
col c2 heading Exec format 9999999
col SQL_TEXT for a100
break on c1
select s.begin_interval_time  c1,sql.sql_id,
  sql.PLAN_HASH_VALUE HASH_VALUE,
    sql.executions_delta     c2
 from
   dba_hist_sqlstat sql,
   dba_hist_snapshot s,
   v$sqltext st
where
   s.snap_id=sql.snap_id and 
   sql.sql_id=st.sql_id and st.SQL_TEXT like 'DELETE%'
and
  s.begin_interval_time like '%17-NOV-14 04.00.04.731 PM%' order by c2 desc;

Note:
  • You can edit the columns & the begin_interval_time (i.e. the snap_id generation time) in the above query as per your requirement.
  • You can change the filter to st.SQL_TEXT like 'UPDATE%' for spotting UPDATE statements.
  • You can change the filter to st.SQL_TEXT like 'INSERT%' for spotting INSERT statements (or combine all three in one pass, as shown in the sketch after this list).
  • User-executed SQL statements are in capital letters and database-generated (recursive) SQL statements are in small letters, hence I used the words UPDATE, DELETE & INSERT in capital letters.
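
Instead of running the query three times, a single pass can filter on all three keywords at once. A sketch under the same assumptions (here the snapshot window is given as a timestamp range instead of LIKE, and v$sqltext is restricted to piece 0 so each statement appears only once):

select s.begin_interval_time,
       q.sql_id,
       q.plan_hash_value,
       q.executions_delta execs
from dba_hist_sqlstat q,
     dba_hist_snapshot s,
     v$sqltext st
where s.snap_id = q.snap_id
  and q.sql_id  = st.sql_id
  and st.piece  = 0
  and (st.sql_text like 'UPDATE%' or st.sql_text like 'INSERT%' or st.sql_text like 'DELETE%')
  and s.begin_interval_time >= to_timestamp('17-NOV-14 16:00','DD-MON-RR HH24:MI')
  and s.begin_interval_time <  to_timestamp('17-NOV-14 17:00','DD-MON-RR HH24:MI')
order by execs desc;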

My guess here is that the query executed the most times (i.e. with the highest number of executions) is the biggest contributor to the heavy archive log generation.
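
As a rough cross-check on that guess (a statement executed a few times over millions of rows can generate more redo than one executed thousands of times over single rows), DBA_HIST_SQLSTAT also records ROWS_PROCESSED_DELTA, which can be sorted on for the same window. A minimal sketch:

select q.sql_id,
       q.executions_delta     execs,
       q.rows_processed_delta rows_processed
from dba_hist_sqlstat q,
     dba_hist_snapshot s
where s.snap_id = q.snap_id
  and s.begin_interval_time >= to_timestamp('17-NOV-14 16:00','DD-MON-RR HH24:MI')
  and s.begin_interval_time <  to_timestamp('17-NOV-14 17:00','DD-MON-RR HH24:MI')
order by rows_processed desc nulls last;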

Output for UPDATE:

Output for INSERT:

Output for DELETE:

I then retrieved the full SQL statements for the sql_ids in the above output from the v$sqltext view (query given below):

set pages 50000
col SQL_TEXT for a75
select sql_text from v$sqltext where sql_id ='&sql_id' order by piece;
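
If the cursor has already aged out of the shared pool, the statement may no longer be in v$sqltext; in that case the copy kept by AWR can be used instead (a minimal sketch):

set long 100000
select sql_text from dba_hist_sqltext where sql_id = '&sql_id';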

Please also refer to the related posts written by me:
