
Tablespace usage details (current size, used size, and free space with extendable size)

Need tablespace usage details? Want to check how much space is used out of the current size and the maximum size of a tablespace? Below are some queries to check the free space (including extendable size), the used space, and the usage percentage.

The query below checks the usage details of a particular tablespace:

set lines 300
col TABLESPACE_NAME for a30
select
   a.TABLESPACE_NAME,
   round(avail,1) curr_size_MB,
   round(used,1) used_MB,
   round(total,1) maxsize_MB,
   round(free+extendable_MB,1) "Free+extendable_MB",
   round(100*(1-(free+extendable_MB)/total),1) "Usage %"
from (select TABLESPACE_NAME,
             sum(BYTES)/(1024*1024) avail,
             sum(MAXBYTES)/(1024*1024) total,
             (sum(MAXBYTES) - sum(BYTES))/(1024*1024) extendable_MB
        from dba_data_files
       group by TABLESPACE_NAME) a,
     (select TABLESPACE_NAME,
             sum(BYTES)/(1024*1024) free
        from dba_free_space
       group by TABLESPACE_NAME) b,
     (select TABLESPACE_NAME,
             BYTES/(1024*1024) used
        from sm$ts_used) c
where a.TABLESPACE_NAME = b.TABLESPACE_NAME
  and a.TABLESPACE_NAME = c.TABLESPACE_NAME
  and a.TABLESPACE_NAME = 'DATA'
order by 4 DESC;
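If you run the query above often for different tablespaces, the hard-coded name can be replaced with a SQL*Plus substitution variable. A small convenience sketch (`&tbs_name` is a name chosen here for illustration, not from the original post):

```sql
-- Same size figures from dba_data_files, parameterised with a SQL*Plus
-- substitution variable. SQL*Plus prompts for tbs_name when the script runs;
-- UPPER() is used because tablespace names are stored in uppercase by default.
select TABLESPACE_NAME,
       round(sum(BYTES)/(1024*1024),1)    curr_size_MB,
       round(sum(MAXBYTES)/(1024*1024),1) max_size_MB
from dba_data_files
where TABLESPACE_NAME = UPPER('&tbs_name')
group by TABLESPACE_NAME;
undefine tbs_name
```

The `undefine` at the end clears the variable so the next run prompts again instead of silently reusing the old value.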


The query below checks the usage details of all tablespaces in the database:

set lines 300
col TABLESPACE_NAME for a30
select
   a.TABLESPACE_NAME,
   round(avail,1) curr_size_MB,
   round(used,1) used_MB,
   round(total,1) maxsize_MB,
   round(free+extendable_MB,1) "Free+extendable_MB",
   round(100*(1-(free+extendable_MB)/total),1) "Usage %"
from (select TABLESPACE_NAME,
             sum(BYTES)/(1024*1024) avail,
             sum(MAXBYTES)/(1024*1024) total,
             (sum(MAXBYTES) - sum(BYTES))/(1024*1024) extendable_MB
        from dba_data_files
       group by TABLESPACE_NAME) a,
     (select TABLESPACE_NAME,
             sum(BYTES)/(1024*1024) free
        from dba_free_space
       group by TABLESPACE_NAME) b,
     (select TABLESPACE_NAME,
             BYTES/(1024*1024) used
        from sm$ts_used) c
where a.TABLESPACE_NAME = b.TABLESPACE_NAME
  and a.TABLESPACE_NAME = c.TABLESPACE_NAME
order by 4 DESC;


Note: both queries above give correct output only for tablespaces whose datafiles are autoextensible.
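This is because MAXBYTES in DBA_DATA_FILES is 0 for files with AUTOEXTEND off, which throws off the "maxsize" and "Usage %" columns. To verify which datafiles are actually autoextensible, check the AUTOEXTENSIBLE column of DBA_DATA_FILES. A quick verification query using standard dictionary columns:

```sql
set lines 200
col file_name for a60
select TABLESPACE_NAME,
       FILE_NAME,
       AUTOEXTENSIBLE,                              -- YES / NO
       round(BYTES/(1024*1024),1)    curr_size_MB,
       round(MAXBYTES/(1024*1024),1) max_size_MB    -- 0 when AUTOEXTEND is off
from dba_data_files
order by TABLESPACE_NAME, FILE_NAME;
```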


If you want to check the free space, used space, and current size of a tablespace that is not autoextensible, use the queries below.

All tablespaces usage details:

set pages 1000
break on report
compute sum of Total_MB on report
compute sum of Used_MB on report
compute sum of Free_MB on report
select a.TABLESPACE_NAME,
       round(curr_size,1) Total_MB,
       round(used,1) Used_MB,
       round(free,1) Free_MB,
       round(100*(1-free/curr_size),1) Usage
from (select TABLESPACE_NAME, BYTES/(1024*1024) curr_size from sm$ts_avail) a,
     (select TABLESPACE_NAME, BYTES/(1024*1024) used from sm$ts_used) b,
     (select TABLESPACE_NAME, BYTES/(1024*1024) free from sm$ts_free) c
where a.TABLESPACE_NAME = b.TABLESPACE_NAME(+)
  and a.TABLESPACE_NAME = c.TABLESPACE_NAME
order by 5 DESC;
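The sm$ts_avail, sm$ts_used, and sm$ts_free views are legacy summary views (typically owned by SYSTEM). If they are not accessible in your environment, an equivalent report can be built from DBA_DATA_FILES and DBA_FREE_SPACE alone. A sketch under that assumption, with used space derived as allocated minus free:

```sql
-- Allocated size per tablespace from dba_data_files; free space from
-- dba_free_space; used = allocated - free. NVL handles tablespaces that
-- are completely full and therefore have no rows in dba_free_space.
select a.TABLESPACE_NAME,
       round(a.alloc_mb,1)                            Total_MB,
       round(a.alloc_mb - nvl(f.free_mb,0),1)         Used_MB,
       round(nvl(f.free_mb,0),1)                      Free_MB,
       round(100*(1 - nvl(f.free_mb,0)/a.alloc_mb),1) Usage
from (select TABLESPACE_NAME, sum(BYTES)/(1024*1024) alloc_mb
        from dba_data_files group by TABLESPACE_NAME) a,
     (select TABLESPACE_NAME, sum(BYTES)/(1024*1024) free_mb
        from dba_free_space group by TABLESPACE_NAME) f
where a.TABLESPACE_NAME = f.TABLESPACE_NAME(+)
order by 5 DESC;
```

The outer join keeps fully used tablespaces in the report instead of dropping them.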


Single tablespace usage details:

set pages 1000
clear computes
select a.TABLESPACE_NAME,
       round(curr_size,1) Total_MB,
       round(used,1) Used_MB,
       round(free,1) Free_MB,
       round(100*(1-free/curr_size),1) Usage
from (select TABLESPACE_NAME, BYTES/(1024*1024) curr_size from sm$ts_avail) a,
     (select TABLESPACE_NAME, BYTES/(1024*1024) used from sm$ts_used) b,
     (select TABLESPACE_NAME, BYTES/(1024*1024) free from sm$ts_free) c
where a.TABLESPACE_NAME = b.TABLESPACE_NAME(+)
  and a.TABLESPACE_NAME = c.TABLESPACE_NAME
  and a.TABLESPACE_NAME = 'RETEK_'
order by 1 ASC;


We'll look at datafile usage in the next post. Until then, goodbye to all Oracle DBAs.
