
Posts

Showing posts from 2014

Tablespace and datafile space usage query to find free space, used space and their percentage

Hi, here I am going to write up the queries related to tablespace usage. We can find the free space, used space and usage percentage of tablespaces and datafiles separately. If your tablespace is not auto-extendable, you can use the queries given in 1 a) and 1 b) below to get the space usage of the tablespace.

1 a) Combined query with sm$ts_avail, sm$ts_used and sm$ts_free:
=====================================================
set pages 1000
break on report
compute sum of Total_MB on report
compute sum of Used_MB on report
compute sum of Free_MB on report
select a.TABLESPACE_NAME, round(Curr_Size,1) Total_MB, round(Used,1) Used_MB, round(Free,1) Free_MB,
       round(100*(1-free/Curr_Size),1) Usage
from (select TABLESPACE_NAME, BYTES/(1024*1024) Curr_Size from sm$ts_avail) a,
     (select TABLESPACE_NAME, BYTES/(1024*1024) Used from sm$ts_used) b,
     (select TABLESPACE_NAME, BYTES/(1024*1024) Free from sm$ts_free) c
where a.TABLESPACE_NAME=b.TABLESPACE_NAME(+)
  and a.TABLESPACE_NAME=c.TABLESPACE_NAME
order by 1
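The preview above stops before query 1 b). As a rough companion, not the post's own 1 b) query, a similar per-tablespace summary can be built from the documented DBA views; the use of dba_data_files and dba_free_space here is my own assumption, shown only as a sketch.

-- Hedged sketch: per-tablespace total/used/free summary from the documented
-- DBA views (dba_data_files, dba_free_space); illustrative alternative only.
set pages 1000
select d.tablespace_name,
       round(d.total_mb,1)                               total_mb,
       round(d.total_mb - nvl(f.free_mb,0),1)            used_mb,
       round(nvl(f.free_mb,0),1)                         free_mb,
       round(100*(1 - nvl(f.free_mb,0)/d.total_mb),1)    "Usage %"
from  (select tablespace_name, sum(bytes)/1024/1024 total_mb
       from   dba_data_files
       group  by tablespace_name) d,
      (select tablespace_name, sum(bytes)/1024/1024 free_mb
       from   dba_free_space
       group  by tablespace_name) f
where d.tablespace_name = f.tablespace_name(+)
order by 1;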

Analysis of ORA-00600: internal error code, arguments: [kxfxsStatus: nojoin but already sent joinOK]

Hi all, we found the ORA-600 internal error below in the alert log.

Alert log error:
ORA-00600: internal error code, arguments: [kxfxsStatus: nojoin but already sent joinOK], [197888], [], [], [], [], [], [], [], [], [], []

We had to report this error, along with our analysis, to our manager. We searched the Oracle support website for documents related to this error and found the following:
Bug 9785632 - Parallel query fails with OERI[kxfxsstatus: nojoin but already sent joinok] (Doc ID 9785632.8)

As per Doc ID 9785632.8:
Affects: versions confirmed as being affected
·         11.2.0.1
·         11.1.0.7
Fixed: this issue is fixed in
·         12.1.0.1 (Base Release)
·         11.2.0.2 (Server Patch Set)
·         11.2.0.1 Bundle Patch 8 for Exadata Database
Symptoms:
·         Internal Error May Occur (ORA-600)
·         ORA-600 [kxfxsStatus: nojoin but already sent joinOK]
Description: ORA-600: …
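Since the note pins down the affected and fixed releases, a quick sanity check is to confirm the exact database version and patch history before matching against the bug. A minimal sketch using standard dictionary views (my addition, not part of the original analysis):

-- Hedged sketch: confirm the exact database version and applied patch/upgrade
-- history before comparing with the affected/fixed versions in the MOS note.
select banner from v$version;

select action_time, action, version, comments
from   dba_registry_history
order  by action_time;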

Analysis of ORA-04031: unable to allocate xxx bytes of shared memory ("shared pool","unknown object","KGLH0^6ab33a","kglob")

Hi all, one of our production databases frequently experienced the errors below in its alert log; the database became unresponsive due to the continuous occurrence of this error, and the CPU load also went high at that time. Here the time window for the error is between Tue Dec 09 15:33:18 2014 and Tue Dec 09 15:50:27 2014.

Errors in the alert log:
ORA-04031: unable to allocate 264 bytes of shared memory ("shared pool","unknown object","KGLH0^6ab33a","kglob")
ORA-04031: unable to allocate 120 bytes of shared memory ("shared pool","unknown object","CCUR^857b59d8","kkocsCardEst: kkocsNodeAllocated")
ORA-04031: unable to allocate 1072 bytes of shared memory ("shared pool","select HIERARCHY_TYPE, SITE_...","CCUR^28cddb74","kkocs.c.kgght")
ORA-04031: unable to allocate 264 bytes of shared memory ("shared pool","unknown object","CCUR^9d1660bd",…
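When chasing ORA-04031 against the shared pool, a first diagnostic step is usually to see how much free memory the shared pool has and which components are consuming it. A minimal sketch against standard dynamic views (my addition, not taken from the post's own analysis):

-- Hedged sketch: shared pool free memory and its top consumers at the time of the error.
set lines 200
col name for a35
select round(bytes/1024/1024,1) free_mb
from   v$sgastat
where  pool = 'shared pool' and name = 'free memory';

select *
from  (select name, round(bytes/1024/1024,1) size_mb
       from   v$sgastat
       where  pool = 'shared pool'
       order  by bytes desc)
where rownum <= 20;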

Moving or cloning Oracle home and database from one Windows server to another Windows server

Here we will see how to clone / move an Oracle 10g R2 (10.2.0.3.0) home and database from one Windows server to another. First I'll explain the situation in which this scenario arose in our environment. For certain reasons, every VM server in our company was moved from the old storage to new storage. While moving one of the servers, which contains an Oracle 10g R2 (10.2.0.3.0) database, we got some errors, so we were not able to complete the task for that one server. Our network team therefore installed a new machine with the same Windows operating system on the new storage. Now our job is to reconfigure the same Oracle 10g (10.2.0.3.0) database for our local development team on the new server. But we don't have the Oracle 10g R2 software with us, so we raised an SR with Oracle support. However, Oracle is not providing the Oracle 10g R2 software anymore, and it came back with the following reasons: "Hello Ramesh,  Please note that, from 1st of march 2014 all the D…

To know the free space and used space of each datafile in a tablespace

Hi all, here we'll see how to check the free space and used space of each datafile in a tablespace separately, as mentioned in the previous post, Tablespace usage details (Current size, used size and free space with extendable). We can use the query below to check the space usage of each data file in a tablespace.

break on report
compute sum of allocated_mb on report
compute sum of used_mb on report
compute sum of free_space_mb on report
set lines 500
COLUMN free_space_mb format 999999.90
COLUMN allocated_mb format 999999.90
COLUMN used_mb format 999999.90
col file_name for a60
SELECT SUBSTR (df.NAME, 1, 60) file_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM   v$datafile df, dba_free_space dfs
WHERE  df.file# = dfs.file_id(+)
  and  dfs.TABLESPACE_NAME=' _DATA '
GROUP BY dfs…
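The preview cuts the query off at its GROUP BY clause. A complete, self-contained sketch of the same idea, reconstructed by me using dba_data_files rather than copied from the post; '&tablespace_name' is a substitution-variable placeholder:

-- Hedged sketch: per-datafile allocated, used and free space for one tablespace.
-- Replace &tablespace_name with your tablespace name when prompted.
set lines 200
col file_name for a60
SELECT ddf.file_name,
       ROUND(ddf.bytes/1024/1024, 2)                          allocated_mb,
       ROUND((ddf.bytes - NVL(fs.free_bytes,0))/1024/1024, 2) used_mb,
       ROUND(NVL(fs.free_bytes,0)/1024/1024, 2)               free_space_mb
FROM   dba_data_files ddf,
       (SELECT file_id, SUM(bytes) free_bytes
        FROM   dba_free_space
        GROUP  BY file_id) fs
WHERE  ddf.file_id = fs.file_id(+)
AND    ddf.tablespace_name = '&tablespace_name'
ORDER  BY ddf.file_name;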

Tablespace usage details (Current size, used size and free space with extendable)

Need tablespace usage details? Want to check how much space is used out of the current size and the max size of the tablespace? Below are some queries to check the free space including extendable size, the used space and their percentage of usage.

The query below checks the usage details of a particular tablespace:
set lines 300
col TABLESPACE_NAME for a30
col file_name for a45
select
   a.TABLESPACE_NAME,
   round(avail,1) curr_size_MB,
   round(used,1) used_MB,
   round(total,1) Maxsize_MB,
   round(free+extentable_MB,1) "Free+extendable_MB",
   round(100*(1-(free+extentable_MB)/total),1) "Usage %"
from (
      select
        TABLESPACE_NAME,
        sum(BYTES)/(1024*1024) avail,
        sum(MAXBYTES)/(1024*1024) total,
        (sum(MAXBYTES)/(1024*1024) - sum(BYTES)/(1024*1024)) extentable_MB
      from dba_data_files
      group by TABLESPACE_NAME) a,
     (
      select
         TABLESPACE_NAME,
         sum(BYTES)/(1024*1024) free
      from dba_free_space…
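The preview stops inside the dba_free_space subquery. A complete sketch along the same lines (current size, used, max size and free-plus-extendable space), reconstructed by me rather than taken verbatim from the post; note that MAXBYTES is 0 for datafiles without autoextend, which the GREATEST() call is meant to handle:

-- Hedged sketch: per-tablespace current size, used, max (autoextend) size and
-- free + extendable space, derived from dba_data_files and dba_free_space.
set lines 300
col tablespace_name for a30
select a.tablespace_name,
       round(a.avail,1)                                          curr_size_mb,
       round(a.avail - nvl(f.free,0),1)                          used_mb,
       round(a.total,1)                                          maxsize_mb,
       round(nvl(f.free,0) + a.extendable_mb,1)                  "Free+extendable_MB",
       round(100*(1-(nvl(f.free,0)+a.extendable_mb)/a.total),1)  "Usage %"
from  (select tablespace_name,
              sum(bytes)/1024/1024                                    avail,
              sum(greatest(maxbytes,bytes))/1024/1024                 total,
              (sum(greatest(maxbytes,bytes)) - sum(bytes))/1024/1024  extendable_mb
       from   dba_data_files
       group  by tablespace_name) a,
      (select tablespace_name, sum(bytes)/1024/1024 free
       from   dba_free_space
       group  by tablespace_name) f
where a.tablespace_name = f.tablespace_name(+)
order by "Usage %" desc;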

ORA-00600: internal error code, arguments: [7005],

Hi all, today I am going to write up another analysis of mine using a My Oracle Support document. In one of our client's databases, we are getting the following error message continuously.

Alert log entry:
Wed Jul 23 11:45:43 2014
Errors in file /home/oracle/app/oracle/diag/rdbms/db/db/trace/db_ora_14547.trc  (incident=149824):
ORA-00600: internal error code, arguments: [7005], [0], [], [], [], [], [], [], [], [], [], []

The SQL_TEXT involved in this error is:
select iso_country_code  from countries  where (SDO_RELATE(geom,SDO_GEOMETRY(:"SYS_B_0",:"SYS_B_1"),:"SYS_B_2")=:"SYS_B_3")

One of our DBAs reported as follows after searching the documentation for the error. According to Doc ID 138935.1, the error occurred due to Bug 8770557.
Description: ORA-600 [7005] from SQL with user defined selectivity function (eg: Text queries)
Solution: The issue is fixed in 11.2.0.2, 12.1.0.1
But o…

ORA-07445: exception encountered: core dump [kglIsOwnerVersionable()]

Hi all, we received a mail from a customer asking us to investigate the above error reported in the database's alert log. Please find the details below.

"Can you please investigate the below error against EBSPRD and let me know the next course of action?"

Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x24] [PC:0x2587EB9, kglIsOwnerVersionable()+123] [flags: 0x0, count: 1]
Errors in file /sw/oraebs/EBSPRD/db/tech_st/11.2.0/admin/EBSPRD_ebsprddb01/diag/rdbms/ebsprd/EBSPRD/trace/EBSPRD_ora_17713.trc  (incident=624688):
ORA-07445: exception encountered: core dump [kglIsOwnerVersionable()+123] [SIGSEGV] [ADDR:0x24] [PC:0x2587EB9] [Address not mapped to object] []
Incident details in: /sw/oraebs/EBSPRD/db/tech_st/11.2.0/admin/EBSPRD_ebsprddb01/diag/rdbms/ebsprd/EBSPRD/incident/incdir_624688/EBSPRD_ora_17713_i624688.trc
Non critical error ORA-48913 caught while writing to trace file "/sw/oraebs/EBSPRD/db/tech_st/11.2.0/admin/EBSPRD_ebsprddb01/diag/rdbms/ebsprd/EBSP…

Finding INSERT, UPDATE & DELETE SQL statements that contributed to heavy archive log generation

We received a mail from a customer stating that there was heavy business activity today after 10 AM, especially between 16:00 and 17:00 hrs, where we see 47 archive log switches and 100 GB of archive logs generated after 10 AM today. Using the SQL query below you can check in which intervals more archive logs were generated in the database.

col day for a12
set lines 1000
set pages 999
col "00" for a3
col "01" for a3
col "02" for a3
col "03" for a3
col "04" for a3
col "05" for a3
col "06" for a3
col "07" for a3
col "08" for a3
col "09" for a3
col "10" for a3
col "11" for a3
col "12" for a3
col "13" for a3
col "14" for a3
col "15" for a3
col "16" for a3
col "17" for a3
col "18" for a3
col "19" for a3
col "20" for a3
col "21" for a3
col "22" for a3
col …
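The preview cuts the query off in the middle of its column formatting. A shorter sketch that answers the same questions (log switches per hour, and which sessions are generating the most block changes as a proxy for heavy DML), reconstructed by me from v$log_history and v$sess_io rather than copied from the post:

-- Hedged sketch: archive log switches per hour for the last 7 days.
set lines 200
set pages 200
select to_char(first_time,'YYYY-MM-DD') day,
       to_char(first_time,'HH24')       hour,
       count(*)                         switches
from   v$log_history
where  first_time > sysdate - 7
group  by to_char(first_time,'YYYY-MM-DD'), to_char(first_time,'HH24')
order  by 1, 2;

-- Hedged sketch: sessions with the most block changes (rough proxy for redo/DML volume).
select s.sid, s.username, s.program, i.block_changes
from   v$session s, v$sess_io i
where  s.sid = i.sid
order  by i.block_changes desc;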

Legacy Mode Active due to the following parameters while expdp/impdp

Today I got to know about legacy mode in the Data Pump utility during an export and import: instead of the "exclude=statistics" parameter, I mentioned the "statistics=none" parameter with expdp.

Syntax:
=======
expdp tables=s1.t1,s1.t2,s3.t3,s4.t4 directory=DATA_PUMP_DIR dumpfile=expdp_tables_28052014.dmp logfile=expdp_tables_28052014.log statistics=none

Export log output:
===============
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
;;; Legacy Mode Active due to the following parameters:
;;; Legacy Mode Parameter: "statistics=none" Location: Command Line, ignored.
;;; Legacy Mode has set reuse_dumpfiles=true parameter.

But the export completed successfully. When I googled Legacy Mode, I came to know the following facts. In 11gR2, Oracle decided to introduce Data Pump Legacy Mode in order to provide backward co…

Online Segment Shrink

Why does row movement have to be enabled before shrinking segments? The shrink is accomplished by moving rows between blocks, hence the requirement for row movement to be enabled for the shrink to take place. This can cause problems with ROWID-based triggers. The shrink process is only available for objects in tablespaces with automatic segment space management enabled.

Online Segment Shrink
==================
Based on the recommendations from the Segment Advisor, you can recover space from specific objects using one of the variations of the ALTER TABLE ... SHRINK SPACE command.

-- Enable row movement.
ALTER TABLE scott.emp ENABLE ROW MOVEMENT;
-- Recover space and amend the high water mark (HWM).
ALTER TABLE scott.emp SHRINK SPACE;
-- Recover space, but don't amend the high water mark (HWM).
ALTER TABLE scott.emp SHRINK SPACE COMPACT;
-- Recover space for the object and all dependent objects.
ALTER TABLE scott.emp SHRINK SPACE CASCADE;

The shrink is accomplished…
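Since shrink only works in tablespaces with automatic segment space management, it can be worth confirming that setting for the table's tablespace before attempting the commands above. A small check using standard dictionary views (my addition, with SCOTT.EMP kept as the example table):

-- Hedged sketch: confirm that the table's tablespace uses automatic segment
-- space management (required for ALTER TABLE ... SHRINK SPACE).
select tablespace_name, segment_space_management
from   dba_tablespaces
where  tablespace_name = (select tablespace_name
                          from   dba_tables
                          where  owner = 'SCOTT'
                          and    table_name = 'EMP');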

To get DDL of the User, Grants, Table, Index & View Definitions

Note: Before getting the DDL, set long 50000 at the SQL prompt; only then will you get the full DDL when it has more characters.

To get the DDL of a user:
Syntax:
set long 50000 (it gives you the full output when you set long to a higher value)
set pages 50000
set lines 300
SELECT dbms_metadata.get_ddl('USER','<schema_name>') FROM dual;

To get the DDL of a tablespace:
Syntax:
select dbms_metadata.get_ddl('TABLESPACE','tablespace_name') from dba_tablespaces;

To get the DDL of the roles granted to a user:
Syntax:
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT','<schema_name>') from dual;

Object grants and system grants are important when you get insufficient privileges errors during recompilation of invalid objects after importing the database. You must find the grants for the user who has privileges on some other user's objects.

To get the DDL of the object grant privileges for a user:
Syntax:
SELECT DBMS_METADA…
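The preview cuts off before the table, index and view examples promised in the title. For completeness, a hedged sketch of the corresponding dbms_metadata calls; the SCOTT/HR object names are placeholders of mine, not taken from the post:

-- Hedged sketch: DDL for a table, an index, a view, plus object and system grants.
set long 50000
set pages 0
set lines 300
SELECT dbms_metadata.get_ddl('TABLE','EMP','SCOTT')          FROM dual;
SELECT dbms_metadata.get_ddl('INDEX','PK_EMP','SCOTT')       FROM dual;
SELECT dbms_metadata.get_ddl('VIEW','EMP_DETAILS_VIEW','HR') FROM dual;
-- Grants made on a specific object:
SELECT dbms_metadata.get_dependent_ddl('OBJECT_GRANT','EMP','SCOTT') FROM dual;
-- System grants and object grants received by a user:
SELECT dbms_metadata.get_granted_ddl('SYSTEM_GRANT','<schema_name>') FROM dual;
SELECT dbms_metadata.get_granted_ddl('OBJECT_GRANT','<schema_name>') FROM dual;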

Export and import multiple schemas using expdp/impdp (Data Pump utility)

Use the commands below to export and import multiple schemas:

expdp schemas=schema1,schema2,schema3 directory=DATA_PUMP_DIR dumpfile=schemas120514bkp.dmp exclude=statistics logfile=expdpschemas120514.log
impdp schemas=schema1,schema2,schema3 directory=DATA_PUMP_DIR dumpfile=schemas120514bkp.dmp logfile=impdpschemas120514.log

Commands to export and import a single schema:

expdp schemas=schema directory=DATA_PUMP_DIR dumpfile=schema120514bkp.dmp exclude=statistics logfile=expdpschema120514.log
impdp schemas=schema directory=DATA_PUMP_DIR dumpfile=schema120514bkp.dmp logfile=expdpschema120514.log

The parameter STATISTICS=NONE can be used in either the export or the import; there is no need to use it in both.

To export metadata only, in order to get the DDL of the schemas:

expdp schemas=schema1,schema2,schema3 directory=TEST_DIR dumpfile=content.dat content=METADATA_ONLY exclude=statistics

To get the DDL into a text file:

impdp directory=TEST_DIR sqlfile=sql.dat logfile=sql.log dumpfile=content12051…
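Before running the expdp/impdp commands above, it can help to confirm where the directory objects actually point on the server, since the dump and log files land there, and who is allowed to use them. A small check against standard dictionary views (my addition, not part of the original post):

-- Hedged sketch: verify the directory objects used by expdp/impdp and their grants.
col directory_name for a20
col directory_path for a70
select directory_name, directory_path
from   dba_directories
where  directory_name in ('DATA_PUMP_DIR','TEST_DIR');

select grantee, privilege
from   dba_tab_privs
where  table_name in ('DATA_PUMP_DIR','TEST_DIR');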

Some useful RMAN commands to check the status of backup

If we want to track the progress of an RMAN backup that is currently running, use the SQL query below to find what percentage (%) of the RMAN backup has been completed and how much is remaining.

Current status of a running RMAN process
============================
set lines 300
set pages 1000
col START_TIME for a20
col SID for 99999
select SID, to_char(START_TIME,'dd-mm-yy hh24:mi:ss') START_TIME, TOTALWORK, sofar,
       (sofar/totalwork) * 100 done,
       sysdate + TIME_REMAINING/3600/24 end_at
from v$session_longops
where totalwork > sofar
  AND opname NOT LIKE '%aggregate%'
  AND opname like 'RMAN%';

To find the total time elapsed for an RMAN backup in minutes, use the SQL query below.
============================================================
set lines 300
set pages 1000
col STATUS for a10
col START_TIME for a20
col END_TIME for a20
select SESSION_KEY, INPUT_TYPE, STATUS, to_char(START_TIME,'mm/dd/yy…
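The second query is cut off in the preview. A complete sketch of an elapsed-time query over the RMAN job history, reconstructed by me from v$rman_backup_job_details rather than copied verbatim from the post:

-- Hedged sketch: RMAN backup job history with status and elapsed time in minutes.
set lines 300
set pages 1000
col STATUS for a25
col START_TIME for a20
col END_TIME for a20
select session_key, input_type, status,
       to_char(start_time,'mm/dd/yy hh24:mi') start_time,
       to_char(end_time,'mm/dd/yy hh24:mi')   end_time,
       round(elapsed_seconds/60,1)            elapsed_min
from   v$rman_backup_job_details
order  by session_key;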

SQL query to find Flash Recovery Area (FRA) usage

Please find the SQL queries below to get the used and free size of the flash recovery area. Both queries are the same: the first one gives its output rounded to whole numbers, the second one gives the exact numbers.

set lines 1000
col name for a35
col SPACE_LIMIT for a15
col SPACE_AVAILABLE for a15
col SPACE_USED for a15
col SPACE_RECLAIMABLE for a17
SELECT NAME,
       TO_CHAR(SPACE_LIMIT/1024/1024/1024,'999') AS SPACE_LIMIT,
       TO_CHAR(SPACE_USED/1024/1024/1024,'999') SPACE_USED,
       TO_CHAR(SPACE_LIMIT/1024/1024/1024 - SPACE_USED/1024/1024/1024 + SPACE_RECLAIMABLE/1024/1024/1024, '999') AS SPACE_AVAILABLE,
       TO_CHAR(SPACE_RECLAIMABLE/1024/1024/1024,'999') SPACE_RECLAIMABLE,
       NUMBER_OF_FILES,
       ROUND((SPACE_USED - SPACE_RECLAIMABLE)/SPACE_LIMIT * 100, 1) AS PERCENT_FULL
FROM V$RECOVERY_FILE_DEST;

set lines 200
col NAME for a35
SELECT NAME,
       (SPACE_LIMIT/1024/1024/1024) AS SPACE_LIMIT,
       SPACE_USED/1024/1024/1024 AS SPACE_USED,
       (SPACE_LIMIT/1024/1024/1024) - (SPACE_US…
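A per-file-type breakdown is often useful alongside the totals above. A minimal sketch against v$flash_recovery_area_usage (my addition, not part of the original post):

-- Hedged sketch: FRA usage broken down by file type.
set lines 200
col file_type for a20
select file_type,
       percent_space_used,
       percent_space_reclaimable,
       number_of_files
from   v$flash_recovery_area_usage
order  by percent_space_used desc;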

RMAN backup fails with ORA-19513 and ORA-27206 error

Today we received a mail from a customer regarding an RMAN backup of archive logs to tape that had failed. The RMAN backup error details are below:

RMAN-03009: failure of backup command on ORA_SBT_TAPE_1 channel at 05/06/2014 20:05:31
ORA-19513: failed to identify sequential file
ORA-27206: requested file not found in media management catalog

From the above message we can easily identify that they are taking the backup to tape directly from the network server, not from the DB server. I checked My Oracle Support notes related to the above error and found one document, described below. The note is "RMAN Backup to tape using Legato Networker fails ORA-19513 and ORA-27206 (Doc ID 1312448.1)". According to that note:
CHANGES: A new DNS alias was created on the network for the server, or a new…
CAUSE: This error is raised whenever the Media Manager, Legato Networker in this case, is unable to write the file to tape. The creation of a new DNS alias can…