
Finding SQL queries that are using more CPU resources

Here we'll see how to find heavy, CPU-consuming SQL queries in Oracle.

We face high CPU load on our Linux servers from time to time. Whenever CPU load is high, we capture the top output, and if the load is due to the Oracle database, we track the currently running SQL queries that are using the most CPU on the database and update the customer when they ask for an RCA report of the high CPU load on the servers.

The below query finds the SQL queries that are currently causing CPU load on the server and using the most CPU resources.

Sessions based on CPU usage:
-----------------------------------------
set pages 1000
set lines 1000
col ospid for a06
col sid for 99999
col serial# for 999999
col sql_id for a14
col username for a15
col program for a23
col module for a18
col osuser for a10
col machine for a25
select * from (
select p.spid ospid,
       ss.sid, ss.serial#, ss.sql_id, ss.username,
       substr(ss.program,1,22) program, ss.module, ss.osuser, ss.machine, ss.status,
       se.value/100 cpu_usage_sec
from v$session ss,
     v$sesstat se,
     v$statname sn,
     v$process p
where se.statistic# = sn.statistic#
  and sn.name = 'CPU used by this session'
  and se.sid = ss.sid
  and ss.paddr = p.addr
  and ss.username is not null
  and ss.username != 'SYS'
  and ss.status = 'ACTIVE'
  and se.value > 0
order by se.value desc);
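The 'CPU used by this session' statistic is cumulative since logon and is reported in centiseconds, which is why the query divides se.value by 100 to show seconds. On 10g and later, v$sess_time_model offers the same information with finer granularity; a sketch of that alternative (the 'DB CPU' statistic is in microseconds, and the filters below mirror the query above):

```sql
-- Alternative sketch using v$sess_time_model (10g+).
-- 'DB CPU' is cumulative CPU time in microseconds since logon.
select s.sid, s.serial#, s.username, s.sql_id,
       t.value/1e6 cpu_seconds
from v$session s, v$sess_time_model t
where s.sid = t.sid
  and t.stat_name = 'DB CPU'
  and s.username is not null
  and s.username != 'SYS'
order by t.value desc;
```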


The output looks like the screenshot below.

[Screenshot: sessions based on CPU usage]

The above query will give you everything, including the OS process ID (ospid).

Compare the above query output with the output of the top command on Linux by matching on the PID.
You will get a clear picture of the CPU load in terms of the Oracle database.
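For that comparison, a quick way to capture the heaviest OS processes is a one-shot ps snapshot sorted by CPU; the flags below are GNU/Linux procps options, so adjust them on other platforms:

```shell
# One-shot snapshot of the heaviest CPU consumers on the box.
# Match the PID column here against the ospid column from the SQL output above.
ps -eo pid,pcpu,user,comm --sort=-pcpu | head -15
```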

[Screenshot: top command output]
At times, the PID using more CPU in the top output will belong to an inactive session.
The below query finds such inactive sessions and their details:

set pages 1000
set lines 1000
col spid for a06
col sid for 99999
col serial# for 999999
col sql_id for a14
col username for a10
col program for a30
col module for a18
col osuser for a15
col machine for a25
select p.spid, s.sid, s.serial#, s.sql_id, s.username, s.status,
       s.program, s.module, s.osuser, s.machine
from v$session s, v$process p
where s.paddr = p.addr
  and s.status = 'INACTIVE';
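If you also want to see how long each of those sessions has been idle, v$session exposes last_call_et, which for an INACTIVE session is the number of seconds since it became inactive; a small variation of the query above:

```sql
-- Sketch: same join as above, plus idle time in seconds (last_call_et).
select p.spid, s.sid, s.serial#, s.username, s.status,
       s.last_call_et idle_seconds
from v$session s, v$process p
where s.paddr = p.addr
  and s.status = 'INACTIVE'
order by s.last_call_et desc;
```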


If you want to track session details using a PID listed in the top output when CPU load is high, you only need to run the query below once. After it returns, just type / and press Enter; SQL*Plus will prompt "Enter value for ospid", where you paste the PID and press Enter to get the output. You can track as many PIDs as you like while viewing the top output in parallel in another session.

set pages 100
set lines 1000
col spid heading 'SPID' for a06
col sid heading 'SID' for 99999
col serial# heading 'serial' for 999999
col sql_id for a14
col username for a10
col program for a17
col module for a08
col osuser for a07
col machine for a20
select p.spid, s.sid, s.serial#, s.username, s.status, s.sql_id,
       s.program, s.module, s.osuser, s.machine, s.event
from v$session s, v$process p
where s.paddr = p.addr
  and p.spid = &ospid;
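Once you have the sql_id for a given PID, you can also pull the statement text from v$sql; a sketch of that follow-up lookup (it prompts for the same ospid substitution variable):

```sql
-- Sketch: fetch the SQL text currently associated with an OS process id.
select s.sid, s.sql_id, q.sql_text
from v$session s, v$process p, v$sql q
where s.paddr = p.addr
  and s.sql_id = q.sql_id
  and s.sql_child_number = q.child_number
  and p.spid = &ospid;
```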

Comments

1. Thank you for sharing, this is really useful. It would be nice if you had a newsletter on the blog to keep following new posts.
   Foued

   Reply: Dear Foued, thank you for your comment. At the bottom of the page, I have added a gadget to follow my posts by email.
