
Online Segment Shrink

Why does row movement need to be enabled before shrinking a segment?

The shrink is accomplished by moving rows between blocks, hence the requirement for row movement to be enabled before the shrink can take place. This can cause problems with ROWID-based triggers. The shrink operation is only available for objects in tablespaces with automatic segment space management (ASSM) enabled.
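
As a quick check, the SEGMENT_SPACE_MANAGEMENT column of the DBA_TABLESPACES view shows which tablespaces qualify; a value of AUTO indicates automatic segment space management.

-- Check which tablespaces use automatic segment space management (ASSM).
SELECT tablespace_name, segment_space_management
FROM   dba_tablespaces;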


Online Segment Shrink
==================
Based on the recommendations from the Segment Advisor, you can recover space from specific objects using one of the variations of the ALTER TABLE ... SHRINK SPACE command.
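
If you want to review what the Segment Advisor has flagged before shrinking anything, a sketch along the following lines lists its findings from the standard advisor dictionary views (the task names and findings present will vary by environment).

-- List Segment Advisor findings. Task names vary by environment.
SELECT t.task_name,
       f.message,
       f.more_info
FROM   dba_advisor_findings f
       JOIN dba_advisor_tasks t
         ON t.task_id = f.task_id
WHERE  t.advisor_name = 'Segment Advisor';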

-- Enable row movement.
ALTER TABLE scott.emp ENABLE ROW MOVEMENT;

-- Recover space and amend the high water mark (HWM).
ALTER TABLE scott.emp SHRINK SPACE;

-- Recover space, but don't amend the high water mark (HWM).
ALTER TABLE scott.emp SHRINK SPACE COMPACT;

-- Recover space for the object and all dependent objects.
ALTER TABLE scott.emp SHRINK SPACE CASCADE;

As noted above, the shrink is accomplished by moving rows between blocks, hence the requirement for row movement to be enabled. This can cause problems with ROWID-based triggers, and the operation is only available for objects in tablespaces with automatic segment space management enabled.
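
The ROWID caveat is easy to see: because rows physically move, a row's ROWID can change across a shrink. A quick illustration, assuming the standard SCOTT demo data used above.

-- Note the ROWID of a row before the shrink.
SELECT rowid FROM scott.emp WHERE empno = 7369;

-- After ALTER TABLE scott.emp SHRINK SPACE, the same query may return
-- a different ROWID, which is why ROWID-based triggers and any cached
-- ROWIDs are affected.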

The COMPACT option allows the shrink operation to be broken into two stages. First, the rows are moved using the COMPACT option, but the HWM is not adjusted, so no parsed SQL statements are invalidated. The HWM can be adjusted at a later date by reissuing the statement without the COMPACT option, at which point any dependent SQL statements will need to be reparsed.
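
Putting the two stages together with the scott.emp example, the sequence looks like this.

-- Stage 1: move the rows, but leave the HWM alone, so no
-- parsed SQL statements are invalidated.
ALTER TABLE scott.emp SHRINK SPACE COMPACT;

-- Stage 2: later, during a quiet period, adjust the HWM.
-- Dependent SQL statements are reparsed at this point.
ALTER TABLE scott.emp SHRINK SPACE;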

Other shrink commands of interest are displayed below.

-- Shrink a LOB segment.
ALTER TABLE table_name MODIFY LOB(lob_column) (SHRINK SPACE);

-- Shrink an IOT overflow segment.
ALTER TABLE iot_name OVERFLOW SHRINK SPACE;
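
To confirm how much space was returned, compare the segment size in DBA_SEGMENTS before and after the shrink. Row movement can also be disabled again afterwards if it is not otherwise needed.

-- Check the segment size (run before and after the shrink to compare).
SELECT segment_name, bytes/1024/1024 AS size_mb
FROM   dba_segments
WHERE  owner = 'SCOTT'
AND    segment_name = 'EMP';

-- Optionally disable row movement once the shrink is complete.
ALTER TABLE scott.emp DISABLE ROW MOVEMENT;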
