
Dropping many tables at a time using a PL/SQL script

Hello all,

Here is a script to drop a large number of tables in one go. We use it in our environment whenever our client sends a request to drop many tables at once. It may help you in a similar situation: it is a much better option than dropping the tables one by one. We also have an upcoming activity to drop 111 tables occupying 105 GB of space, followed by reclaiming the physical disk space freed by the drop.

set timing on
set serveroutput on
begin
     for X in (
-- DISTINCT plus the segment_type filter prevent the same table from being
-- returned once per partition/LOB segment, which would make the loop try to
-- drop it twice and fail with ORA-00942.
select distinct OWNER, SEGMENT_NAME from dba_segments
where SEGMENT_TYPE like 'TABLE%' and SEGMENT_NAME in (
'BKP_31MAY14',
'BKP_26MAY14',
'BKP_27MAY14',
'BKP_03JUN14',
'BKP_29MAY14',
'BKP_25MAY14',
'BKP_28MAY14',
'BKP_22MAY14',
'BKP_30MAY14',
'BKP_23MAY14',
'BKP_24MAY14',
'BKP_19MAY14',
'BKP_21MAY14',
'BKP_20MAY14',
'BKP_18MAY14',
'BKP_17MAY14',
'BKP_15MAY14',
'BKP_10MAY14',
'BKP_12MAY14',
'BKP_14MAY14',
'BKP_28APR14',
'BKP_07MAY14',
'BKP_16MAY14',
'BKP_11MAY14',
'BKP_02MAY14',
'BKP_20APR14',
'BKP_24APR14',
'BKP_27APR14',
'BKP_09MAY14',
'BKP_19APR14',
'BKP_23APR14',
'BKP_06MAY14',
'BKP_04MAY14',
'BKP_30APR14',
'BKP_13MAY14',
'BKP_22APR14',
'BKP_29APR14',
'BKP_25APR14',
'BKP_26APR14',
'BKP_05MAY14',
'BKP_08MAY14',
'BKP_15APR14',
'BKP_21APR14',
'BKP_14APR14',
'BKP_11APR14',
'BKP_18APR14',
'BKP_07APR14',
'BKP_03MAY14',
'BKP_17APR14',
'BKP_10APR14',
'BKP_06APR14',
'BKP_31MAR14',
'BKP_02APR14',
'BKP_16APR14',
'BKP_13APR14',
'BKP_09APR14',
'BKP_12APR14',
'BKP_08APR14',
'BKP_05APR14',
'BKP_04APR14',
'BKP_30MAR14',
'BKP_28MAR14',
'BKP_29MAR14',
'BKP_03APR14',
'BKP_20MAR14',
'BKP_18MAR14',
'BKP_15MAR14',
'BKP_27MAR14',
'BKP_25MAR14',
'BKP_17MAR14',
'BKP_19MAR14',
'BKP_10MAR14',
'BKP_12MAR14',
'BKP_11MAR14',
'BKP_21MAR14',
'BKP_23MAR14',
'BKP_24MAR14',
'BKP_28FEB14',
'BKP_07MAR14',
'BKP_14MAR14',
'BKP_13MAR14',
'BKP_26MAR14',
'BKP_22MAR14',
'BKP_16MAR14',
'BKP_31JAN14',
'BKP_31DEC13',
'BKP_01DEC13',
'BKP_30NOV13',
'BKP_01NOV13',
'BKP_31OCT13',
'BKP_01OCT13',
'BKP_30SEP13',
'BKP_31JUL13',
'WC_IL_TRAITS_DS_INS',
'BKP_31AUG13',
'BKP_30JUN13',
'BKP_01JUL13',
'BKP_31MAY',
'BKP_01JUN',
'BKP_30APR',
'BKP_31MAR',
'BKP_28FEB',
'BKP_31JAN',
'BKP_31DEC') order by 1,2
)
loop
dbms_output.put_line('Dropping ' || X.OWNER || '.' || X.SEGMENT_NAME);
execute immediate 'drop table '|| X.OWNER ||'.'|| X.SEGMENT_NAME || ' cascade constraints purge';
     end loop;
end;
/
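Before running the drop, it can be useful to estimate how much space the backup tables occupy, so you know roughly how much disk you will reclaim afterwards. A minimal sketch, assuming your backup tables follow the same BKP_ naming pattern as above (adjust the LIKE filter to your own convention):

```sql
-- Approximate space held by backup tables, per owner (hypothetical BKP_% pattern).
-- Run before the drop to estimate how much space the purge will free.
select owner,
       count(distinct segment_name)              as table_count,
       round(sum(bytes) / 1024 / 1024 / 1024, 2) as size_gb
from   dba_segments
where  segment_type like 'TABLE%'
and    segment_name like 'BKP\_%' escape '\'
group by owner
order by size_gb desc;
```

Because the script uses the PURGE clause, the dropped tables bypass the recycle bin and the space is released to the tablespace immediately; reclaiming it at the OS level still requires a separate datafile resize or tablespace shrink activity.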
