
Posts

Showing posts from May, 2014

Legacy Mode Active due to the following parameters while expdp/impdp

Today I learned about legacy mode in the Data Pump utility during export and import: instead of the "exclude=statistics" parameter, I specified the "statistics=none" parameter with expdp.

Syntax:
=======
expdp tables=s1.t1,s1.t2,s3.t3,s4.t4 directory=DATA_PUMP_DIR dumpfile=expdp_tables_28052014.dmp logfile=expdp_tables_28052014.log statistics=none

Export log output:
===============
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
;;; Legacy Mode Active due to the following parameters:
;;; Legacy Mode Parameter: "statistics=none" Location: Command Line, ignored.
;;; Legacy Mode has set reuse_dumpfiles=true parameter.

Still, the export completed successfully. When I searched for information on Legacy Mode, I learned the following: in 11gR2, Oracle introduced Data Pump Legacy Mode in order to provide backward co
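For comparison, a minimal sketch of the same export written with the native Data Pump parameter (exclude=statistics, mentioned above) instead of the legacy statistics=none; the table, directory and file names are placeholders, not taken from the original post.

expdp tables=s1.t1,s1.t2 directory=DATA_PUMP_DIR dumpfile=expdp_tables.dmp logfile=expdp_tables.log exclude=statistics

With exclude=statistics the job runs in native Data Pump mode, so no "Legacy Mode Active" lines should appear in the export log.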

Online Segment Shrink

Why does row movement have to be enabled before shrinking a segment? The shrink is accomplished by moving rows between blocks, hence the requirement for row movement to be enabled before the shrink can take place. This can cause problems with ROWID-based triggers. The shrink operation is only available for objects in tablespaces with automatic segment space management enabled.

Online Segment Shrink
==================
Based on the recommendations from the Segment Advisor, you can recover space from specific objects using one of the variations of the ALTER TABLE ... SHRINK SPACE command.

-- Enable row movement.
ALTER TABLE scott.emp ENABLE ROW MOVEMENT;
-- Recover space and amend the high water mark (HWM).
ALTER TABLE scott.emp SHRINK SPACE;
-- Recover space, but don't amend the high water mark (HWM).
ALTER TABLE scott.emp SHRINK SPACE COMPACT;
-- Recover space for the object and all dependent objects.
ALTER TABLE scott.emp SHRINK SPACE CASCADE;

The shrink is accomplished
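A quick sanity check before attempting a shrink, sketched here as an addition to the post and assuming the target is scott.emp from the example above: confirm the tablespace uses automatic segment space management, and note the segment size so it can be compared after the shrink.

-- Is the table's tablespace using automatic segment space management (ASSM)?
SELECT tablespace_name, segment_space_management
FROM   dba_tablespaces
WHERE  tablespace_name = (SELECT tablespace_name
                          FROM   dba_segments
                          WHERE  owner = 'SCOTT' AND segment_name = 'EMP');

-- Segment size before (and again after) the shrink.
SELECT owner, segment_name, ROUND(bytes/1024/1024,2) size_mb
FROM   dba_segments
WHERE  owner = 'SCOTT' AND segment_name = 'EMP';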

To get DDL of the User, Grants, Table, Index & View Definitions

Note: before extracting DDL, run SET LONG 50000 at the SQL prompt; only then will you get the full DDL when it is longer than the default LONG limit.

To get the DDL of a user:
Syntax:
set long 50000  (gives the full output when LONG is set to a higher value)
set pages 50000
set lines 300
SELECT dbms_metadata.get_ddl('USER','<schema_name>') FROM dual;

To get the DDL of a tablespace:
Syntax:
SELECT dbms_metadata.get_ddl('TABLESPACE','<tablespace_name>') FROM dual;

To get the DDL of the roles granted to a user:
Syntax:
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT','<schema_name>') from dual;

Object grants and system grants are important when an "insufficient privileges" error is raised while recompiling invalid objects after importing the database. You must find the grants for the user who has privileges on some other user's objects.

To get the DDL of the object grant privileges given to a user:
Syntax:
SELECT DBMS_METADA
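Since the excerpt is cut off above, here is a minimal sketch of the remaining DBMS_METADATA calls the title refers to (table, index and view DDL, plus object and system grants). SCOTT, EMP, EMP_IDX and EMP_VW are placeholder names, not from the original post.

set long 50000
set pages 50000
set lines 300
-- Table, index and view DDL.
SELECT dbms_metadata.get_ddl('TABLE','EMP','SCOTT') FROM dual;
SELECT dbms_metadata.get_ddl('INDEX','EMP_IDX','SCOTT') FROM dual;
SELECT dbms_metadata.get_ddl('VIEW','EMP_VW','SCOTT') FROM dual;
-- Object grants and system grants made to a user.
SELECT dbms_metadata.get_granted_ddl('OBJECT_GRANT','<schema_name>') FROM dual;
SELECT dbms_metadata.get_granted_ddl('SYSTEM_GRANT','<schema_name>') FROM dual;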

Export and import multiple schema using expdp/impdp (Data Pump utility)

Use the commands below to export and import multiple schemas:

expdp schemas=schema1,schema2,schema3 directory=DATA_PUMP_DIR dumpfile=schemas120514bkp.dmp exclude=statistics logfile=expdpschemas120514.log
impdp schemas=schema1,schema2,schema3 directory=DATA_PUMP_DIR dumpfile=schemas120514bkp.dmp logfile=impdpschemas120514.log

Commands to export and import a single schema:

expdp schemas=schema directory=DATA_PUMP_DIR dumpfile=schema120514bkp.dmp exclude=statistics logfile=expdpschema120514.log
impdp schemas=schema directory=DATA_PUMP_DIR dumpfile=schema120514bkp.dmp logfile=impdpschema120514.log

The STATISTICS=NONE parameter can be used either in the export or in the import; there is no need to use it in both.

To export metadata only, in order to get the DDL of the schemas:
expdp schemas=schema1,schema2,schema3 directory=TEST_DIR dumpfile=content.dat content=METADATA_ONLY exclude=statistics

To get the DDL into a text file:
impdp directory=TEST_DIR sqlfile=sql.dat logfile=sql.log dumpfile=content12051
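A related variation, added here as an illustration rather than part of the original post: if a schema must be imported under a different name on the target database, impdp's REMAP_SCHEMA parameter can be added. The schema and file names below are placeholders.

impdp schemas=schema1 directory=DATA_PUMP_DIR dumpfile=schemas120514bkp.dmp remap_schema=schema1:schema1_copy logfile=impdp_remap.log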

Some useful RMAN commands to check the status of backup

If you want to track the progress of an RMAN job that is currently running, use the SQL query below to find what percentage of the RMAN backup has completed and how much remains.

Current status of the running RMAN process
============================
set lines 300
set pages 1000
col START_TIME for a20
col SID for 99999
select SID, to_char(START_TIME,'dd-mm-yy hh24:mi:ss') START_TIME, TOTALWORK, sofar, (sofar/totalwork) * 100 done, sysdate + TIME_REMAINING/3600/24 end_at
from v$session_longops
where totalwork > sofar
AND opname NOT LIKE '%aggregate%'
AND opname like 'RMAN%';

To find the total elapsed time of an RMAN backup in minutes, use the SQL query below.
============================================================
set lines 300
set pages 1000
col STATUS for a10
col START_TIME for a20
col END_TIME for a20
select SESSION_KEY, INPUT_TYPE, STATUS, to_char(START_TIME,'mm/dd/yy
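Because the second query is cut off in the excerpt, here is a sketch of how a backup-history query against V$RMAN_BACKUP_JOB_DETAILS is typically written; the column list and minute conversion are my assumptions, not the original post's exact text.

set lines 300
col STATUS for a10
select SESSION_KEY, INPUT_TYPE, STATUS,
       to_char(START_TIME,'mm/dd/yy hh24:mi') start_time,
       to_char(END_TIME,'mm/dd/yy hh24:mi')   end_time,
       round(ELAPSED_SECONDS/60,1)            elapsed_min
from   v$rman_backup_job_details
order by session_key;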

SQL query to find Flash Recovery Area (FRA) usage

Please find below the SQL queries to get the used and free space in the flash recovery area. Both queries return the same information: the first rounds to whole integers, the second returns exact numbers.

set lines 1000
col name for a35
col SPACE_LIMIT for a15
col SPACE_AVAILABLE for a15
col SPACE_USED for a15
col SPACE_RECLAIMABLE for a17
SELECT NAME,
       TO_CHAR(SPACE_LIMIT/1024/1024/1024,'999') AS SPACE_LIMIT,
       TO_CHAR(SPACE_USED/1024/1024/1024,'999') SPACE_USED,
       TO_CHAR(SPACE_LIMIT/1024/1024/1024 - SPACE_USED/1024/1024/1024 + SPACE_RECLAIMABLE/1024/1024/1024,'999') AS SPACE_AVAILABLE,
       TO_CHAR(SPACE_RECLAIMABLE/1024/1024/1024,'999') SPACE_RECLAIMABLE,
       NUMBER_OF_FILES,
       ROUND((SPACE_USED - SPACE_RECLAIMABLE)/SPACE_LIMIT * 100, 1) AS PERCENT_FULL
FROM V$RECOVERY_FILE_DEST;

set lines 200
col NAME for a35
SELECT NAME, (SPACE_LIMIT/1024/1024/1024) AS SPACE_LIMIT, SPACE_USED/1024/1024/1024 AS SPACE_USED, (SPACE_LIMIT/1024/1024/1024) - (SPACE_US
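As a complement (not part of the original post), a short sketch that breaks FRA consumption down by file type, assuming the 11g view V$FLASH_RECOVERY_AREA_USAGE is available:

col FILE_TYPE for a20
SELECT file_type, percent_space_used, percent_space_reclaimable, number_of_files
FROM   v$flash_recovery_area_usage;

This is often the quicker way to see whether archive logs, backup pieces or flashback logs are what is actually filling the FRA.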

RMAN backup fails with ORA-19513 and ORA-27206 error

Today we received a mail from a customer reporting that an RMAN backup of archive logs to tape had failed. The RMAN error details are below:

RMAN-03009: failure of backup command on ORA_SBT_TAPE_1 channel at 05/06/2014 20:05:31
ORA-19513: failed to identify sequential file
ORA-27206: requested file not found in media management catalog

From the above message we can tell that they were taking the backup to tape directly from the network server, not from the database server. I checked MOS for notes related to this error and found the document described below: "RMAN Backup to tape using Legato Networker fails ORA-19513 and ORA-27206 (Doc ID 1312448.1)". According to that note:
CHANGES: A new DNS alias was created on the network for the server, or new
CAUSE: This error is raised whenever the Media Manager, Legato Networker in this case, is unable to write the file to tape. The creation of a new DNS alias can
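Not part of the MOS note quoted above, but as an illustration of one common workaround when the media manager resolves the host under a different name (for example after a new DNS alias): the client name can be pinned on the SBT channel. The sketch below assumes Legato/EMC NetWorker's NSR_SERVER and NSR_CLIENT environment parameters and uses placeholder host names.

run {
  allocate channel t1 device type 'SBT_TAPE'
    parms 'ENV=(NSR_SERVER=backup-srv.example.com,NSR_CLIENT=dbhost.example.com)';
  backup archivelog all delete input;
  release channel t1;
}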

krsv_proc_kill: Killing processes (Process by index) message in the alert log

Sometimes we see the following warning messages repeatedly in the database alert log.

Alert log message:
==============
WARN: ARC2: Terminating ARCH (pid 10111) hung on a disk operation
Mon May 05 22:13:32 2014
krsv_proc_kill: Killing 266287972353 processes (Process by index)
Mon May 05 22:20:32 2014
ARC2: Detected ARCH process failure
ARC2: STARTING ARCH PROCESSES
Mon May 05 22:20:32 2014
ARC1 started with pid=21, OS id=12104
ARC1: Archival started
ARC2: STARTING ARCH PROCESSES COMPLETE
Mon May 05 22:20:41 2014
Deleted Oracle managed file /data/oracle/flash_recovery_area/ORCL/archivelog/2014_05_05/o1_mf_1_29497_9phnw343_.arc

Do not panic when you see the above message. As long as there are no other side effects, this error can be ignored. It occurs when disk I/O and CPU load are high on the server. We normally see these warning messages whenever a backup runs, since disk I/O is very high during the backup. During high disk IO
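A quick check to confirm the archiver processes came back cleanly after such a restart, added here as a suggestion rather than part of the original post:

-- State of the archiver (ARCn) processes; ACTIVE/IDLE entries indicate they restarted cleanly.
col STATUS for a10
col STATE for a10
select process, status, log_sequence, state
from   v$archive_processes
where  status <> 'STOPPED';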

To find or calculate IOPS of an Oracle database

At times we need to calculate the IOPS (input/output operations per second) of an Oracle database, either to understand an I/O performance bottleneck or when planning capacity for a new hardware implementation. Although the figure is available in an AWR report, the SQL statement below calculates the IOPS of an Oracle database.

IOPS for an entire day:
==============
ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-YY HH24:MI:SS';
break on report
compute sum of value on report
select METRIC_NAME, avg(AVERAGE) value
from dba_hist_sysmetric_summary
where begin_time between to_date('16-FEB-14 00:00:00', 'dd-MON-yy hh24:mi:ss') and to_date('16-FEB-14 23:59:59', 'dd-MON-yy hh24:mi:ss')
and end_time like '%16-FEB-14%'
and METRIC_NAME in ('Physical Read Total IO Requests Per Sec','Physical Write Total IO Requests Per Sec')
group by METRIC_NAME;

You can change the time interval in the above sql query based on your req
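For the current IOPS (rather than the AWR history used above), a minimal sketch against V$SYSMETRIC; the assumption that GROUP_ID 2 corresponds to the long (60-second) metric interval is mine, not from the original post.

select metric_name, round(value,2) value
from   v$sysmetric
where  metric_name in ('Physical Read Total IO Requests Per Sec',
                       'Physical Write Total IO Requests Per Sec')
and    group_id = 2;  -- assumed to be the 60-second interval group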

SQL query to find the cause or reason for more archive log generation

Finding the reason or cause for heavy archive log generation in a particular time period. As mentioned in the previous post, here we will look at SQL queries to find the cause of heavy archive log generation during a problematic window. Sometimes a customer will ask which SQL generated a large number of archive logs hours or days earlier (not the current archive log generation, which was covered in the previous post). In such scenarios, follow the steps below.

Step 1:
======
First you must know when the large number of archive logs was generated in the database. For that you can use the query below, which shows how many archive logs were generated in each hour (a simpler per-hour count is sketched after the excerpt).

col day for a12
set lines 1000
set pages 999
col "00" for a3
col "01" for a3
col "02" for a3
col "03" for a3
col "04" for a3
col "05" for a3
col "06" for a3
col "07" for
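Since the per-hour pivot query is cut off above, here is a minimal sketch of a simpler hourly count from V$LOG_HISTORY; it gives the same "when did the spike happen" answer without the column-per-hour layout.

-- Log switches (and hence archive logs, in archivelog mode) per hour.
select to_char(first_time,'YYYY-MM-DD HH24') hour, count(*) log_switches
from   v$log_history
group by to_char(first_time,'YYYY-MM-DD HH24')
order by 1;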

Sessions generating more redo or SQL queries generate heavy archive logs

Customers often ask why so many archive logs are being generated and why the flash recovery area (FRA) is filling up with them. Which queries or sessions are involved in generating all those archive logs? Below is the SQL query to find which sessions and SQL statements are generating the most redo, and therefore the most archive logs.

Current sessions generating redo
=================================
set lines 2000
set pages 1000
col sid for 99999
col name for a09
col username for a14
col PROGRAM for a21
col MODULE for a25
select s.sid, sn.SERIAL#, n.name, round(value/1024/1024,2) redo_mb, sn.username, sn.status, substr(sn.program,1,21) "program", sn.type, sn.module, sn.sql_id
from v$sesstat s
join v$statname n on n.statistic# = s.statistic#
join v$session sn on sn.sid = s.sid
where n.name like 'redo size'
and s.value != 0
order by redo_mb desc;

From the above query we can get the session and SQL ID that generate the redo
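As a cross-check (my addition, not from the original post), sessions can also be ranked by block changes from V$SESS_IO, which is a reasonable proxy for redo generation:

set lines 200
col username for a14
col program for a25
select s.sid, s.serial#, s.username, substr(s.program,1,25) program,
       io.block_changes
from   v$session s
join   v$sess_io io on io.sid = s.sid
where  io.block_changes > 0
order by io.block_changes desc;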

SQL queries which is using more CPU resources

Here we'll see how to find the SQL queries consuming the most CPU in Oracle. We face high CPU load on our Linux servers at times. Whenever the CPU load is high, we capture the top output, and if the load is caused by the Oracle database, we track the currently running SQL queries that are using the most CPU, then update the customer when they ask for an RCA report on the high CPU load. The query below finds the SQL queries that are currently causing CPU load on the server and using the most CPU resources.

Sessions based on CPU usage:
-----------------------------------------
set pages 1000
set lines 1000
col OSPID for a06
col SID for 99999
col SERIAL# for 999999
col SQL_ID for a14
col USERNAME for a15
col PROGRAM for a23
col MODULE for a18
col OSUSER for a10
col MACHINE for a25
select * from (
select p.spid "ospid", (se.SID), ss.serial#, ss.SQL_ID, ss.username, substr(ss.program,1,22) "program", ss.module, ss.osuser, ss.MACHINE, ss.status,
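Since the session-level query is cut off in the excerpt, here is a sketch of a statement-level view of the same question: the top SQL by CPU time still present in the shared pool. The "top 10" cutoff and column choices are my assumptions.

set lines 200
col sql_text for a60
select * from (
  select sql_id, executions,
         round(cpu_time/1e6,2)     cpu_sec,
         round(elapsed_time/1e6,2) elapsed_sec,
         substr(sql_text,1,60)     sql_text
  from   v$sql
  order by cpu_time desc
) where rownum <= 10;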