SQL queries to get blocking session details in an Oracle database

The SQL queries below can be used to get the details of database blocking sessions and locks.

Check_db_blocks:
*********************
set linesize 512
select v.instance_name ||' - '|| l1.sid ||' ('||s1.client_identifier||' - '||s1.module||')'
|| ' IS BLOCKING ' || l2.sid ||' ('||s2.client_identifier||' - '||s2.module||')'
||' SQL_ID: '|| s1.sql_id ||' USER: '|| s1.username ||' PROGRAM: '|| s1.program ||' MACHINE: '|| s1.machine "BLOCKING SESSIONS"
from v$instance v, v$lock l1, v$lock l2, v$session s1, v$session s2
where l1.block = 1 and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2
and l1.sid = s1.sid and l2.sid = s2.sid;

Check_db_locks:
********************
set lines 100
select ' SID '||
sid||' ,Serial# '||serial#||' user '||username||' is getting blocked by SID '||blocking_session|| ' for last '||seconds_in_wait||' seconds and wait_event '||event||' for SQL_ID '||sql_id
||' (Locked_Object_ID '||row_wait_obj#||' locked_block_id '||row_wait_block#||' locked_row_id '||row_wait_row#||')' "Active Locks"
from
   v$session
where
   lockwait is not null and ROW_WAIT_OBJ#!=-1 and seconds_in_wait>10
order by
   sid;


To get the blocking SID details
***********************************
set lines 300
col machine for a20
col MODULE for a15
col program for a20
col EVENT for a30
col USERNAME for a10
col OSUSER for a8
select sid,serial#,sql_id,username,status,machine,osuser,module,program,event from v$session where sid=&sid;



col SQL_TEXT for a75
select sql_text from v$sqltext where sql_id ='&sql_id' order by piece;
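Once the blocking session and the SQL it is running have been verified, the blocker can be terminated if business rules allow. A minimal sketch, substituting the sid and serial# obtained from the query above:

```sql
-- Kill the confirmed blocking session (values come from the v$session query above).
ALTER SYSTEM KILL SESSION '&sid,&serial#' IMMEDIATE;

-- On RAC, include the instance id as a third component:
-- ALTER SYSTEM KILL SESSION 'sid,serial#,@inst_id' IMMEDIATE;
```

Always confirm with the application team before killing a session, since the blocker's transaction will be rolled back.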


Useful SQL Statements
***************************
If you encounter a lock-related hang scenario, the following SQL statements can be used to help isolate the waiters and blockers:

Show all sessions waiting for any lock:
*******************************************
SELECT event, p1, p2, p3 FROM v$session_wait WHERE wait_time = 0 AND event like 'enq%';


SELECT sid, p1raw, p2, p3 FROM v$session_wait WHERE wait_time = 0 AND event like 'enq: TX%';

SELECT sw.sid, p.spid, sw.p1raw, sw.p2, sw.p3
FROM v$session_wait sw, v$session s, v$process p
WHERE sw.wait_time = 0 AND sw.event like 'enq: TX%'
AND sw.sid = s.sid AND s.paddr = p.addr;

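For 'enq: TX' waits, P2 encodes the undo segment number and slot of the holding transaction (usn = trunc(p2/65536), slot = mod(p2,65536)) and P3 is its sequence number. A sketch that lists active transactions with their owning sessions so these values can be matched up:

```sql
-- Match a waiter's (p2, p3) values against xidusn/xidslot/xidsqn
-- to identify the transaction (and session) holding the TX lock.
SELECT t.xidusn, t.xidslot, t.xidsqn, s.sid, s.serial#, s.username
FROM   v$transaction t, v$session s
WHERE  t.ses_addr = s.saddr;
```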


Show sessions waiting for a TX lock:
*****************************************

SELECT * FROM v$lock WHERE type='TX' AND request>0;


Show sessions holding a TX lock:
**************************************

SELECT * FROM v$lock WHERE type='TX' AND lmode > 0;
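The raw v$lock rows only expose SIDs, so it is often more useful to join back to v$session to see who is holding the TX locks. A small sketch:

```sql
-- Identify the sessions holding transaction (TX) locks.
SELECT s.sid, s.serial#, s.username, s.machine, l.lmode
FROM   v$session s, v$lock l
WHERE  l.type = 'TX' AND l.lmode > 0
AND    l.sid = s.sid;
```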


Show which segments have undergone the most row lock waits:
************************************************************************

SELECT owner, object_name, subobject_name, value FROM v$segment_statistics 
WHERE statistic_name='row lock waits' AND value > 0 ORDER BY 4 DESC;


Wait a few minutes, then run the script again. Compare the differences in the corresponding VALUE column entries to see which object(s) have undergone the most row lock contention.
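Instead of comparing the two runs by eye, the snapshots can be captured in a scratch table and diffed with SQL. A sketch, where rlw_snap is a hypothetical table name:

```sql
-- First snapshot of row lock wait statistics.
CREATE TABLE rlw_snap AS
SELECT owner, object_name, subobject_name, value, sysdate snap_time
FROM   v$segment_statistics
WHERE  statistic_name = 'row lock waits' AND value > 0;

-- ...wait a few minutes, then take a second snapshot...
INSERT INTO rlw_snap
SELECT owner, object_name, subobject_name, value, sysdate
FROM   v$segment_statistics
WHERE  statistic_name = 'row lock waits' AND value > 0;

-- Delta per object between the two snapshots.
SELECT   owner, object_name, max(value) - min(value) delta
FROM     rlw_snap
GROUP BY owner, object_name
ORDER BY 3 DESC;
```

Drop the scratch table when finished.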
