
Steps/procedure for adding a new disk to an ACFS disk group, i.e. extending/increasing the size of an ACFS file system diskgroup

Hello Guys,

In this post we are going to see the steps to add a new disk to an ACFS (Oracle Automatic Storage Management Cluster File System) disk group, in other words how to extend/increase the size of an ACFS file system diskgroup.

This is also a real-time scenario from my environment.

We received a critical alert on the ACFS file system mount point /prodtop, which had only 10 GB of free space left, with 90% of the disk group space used. The customer therefore requested us to increase the size of /prodtop by another 10 GB.

Before extending the diskgroup

[root@node1: ~]# df -Th /prodtop/
Filesystem            Type  Size  Used Avail Use% Mounted on
/dev/asm/prodtop-153  ACFS   95G   85G   11G  90% /prodtop
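A minimal sketch of the kind of monitoring check that raised this alert: parse a df line and flag the mount once Use% crosses a threshold. The df line and the 85% threshold are made-up sample values here so the logic runs without the real filesystem; on a live host you would feed it the output of `df -Th /prodtop/` instead.

```shell
# Sample df output line (hard-coded so the check runs anywhere)
df_line="/dev/asm/prodtop-153 ACFS 95G 85G 11G 90% /prodtop"
threshold=85

# Pull the Use% column and strip the trailing %
use=$(echo "$df_line" | awk '{ gsub("%", "", $6); print $6 }')

if [ "$use" -ge "$threshold" ]; then
  echo "ALERT: /prodtop at ${use}% used"
fi
```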


So here we go.

1) Ask the OS team to provide a new partitioned disk of size 10 GB, which should be shareable across all nodes if it is a clustered environment.

For example
***********
We got the below partitioned disk.

Disk /dev/xvdj: 10.7 GB, 10737418240 bytes

    Device Boot      Start         End      Blocks     Id  System
/dev/xvdj1               1           1305    10482381   83  Linux

Please follow the below steps once the disk has been presented and partitioned by the support team.


1) Create the disk using ASMLib
#on node1
# /usr/sbin/oracleasm createdisk PRODTOP02 /dev/xvdj1
Writing disk header: done
Instantiating disk: done
(This command needs to be executed as the root user.)

2) Since this is a RAC installation, the disk is stamped on only one node. The other node just scans for the disks.
#on node2
# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "PRODTOP02"
(This command needs to be executed as the root user.)
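Before adding the disk to the diskgroup, it is worth confirming the new ASMLib label is visible on every node. A hedged sketch: `disk_present` is a hypothetical helper, fed canned `oracleasm listdisks` output here (the label PRODTOP01 is a made-up example) so the check is illustrated without a live cluster.

```shell
# Hypothetical helper: report whether a label appears in listdisks output.
# Usage: disk_present LABEL "listdisks output"
disk_present() {
  printf '%s\n' "$2" | grep -qx "$1" && echo present || echo missing
}

# Canned sample output standing in for `/usr/sbin/oracleasm listdisks`
sample_output="PRODTOP01
PRODTOP02"

disk_present PRODTOP02 "$sample_output"   # prints: present
# On a real node, pipe the live output instead:
#   /usr/sbin/oracleasm listdisks | grep -x PRODTOP02
```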

3) Now connect to the ASM instance as SYSASM and add the disk to the existing diskgroup.
#on node1 or node2
SQL> alter diskgroup PRODTOP add disk '/dev/oracleasm/disks/PRODTOP02' NAME PRODTOP_0001;
Diskgroup altered.
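After `alter diskgroup ... add disk`, ASM rebalances extents across all disks in the group in the background. A hedged sketch of polling v$asm_operation until the rebalance finishes; `rebalance_rows` is stubbed to return 0 here so the loop runs anywhere, and the commented sqlplus call shows what the real query would look like.

```shell
# Stubbed query: on a real system this would count active REBAL operations.
rebalance_rows() {
  # Real version (run as the Grid Infrastructure owner):
  #   sqlplus -s / as sysasm <<'EOF'
  #   set heading off feedback off
  #   select count(*) from v$asm_operation where operation = 'REBAL';
  #   EOF
  echo 0   # stub: pretend the rebalance has already completed
}

# Poll until no rebalance operation remains
while [ "$(rebalance_rows)" -gt 0 ]; do
  sleep 30
done
echo "rebalance complete"
```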

4) Validate from asmcmd:
[oracrs@node1: ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576    112635    15255                0           15255              0             N  PRODTOP/
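A quick arithmetic sanity check on the figures above (pure shell, nothing to run against the cluster): the new partition is 10482381 1K blocks, and Total_MB in lsdg grew from 102399 MB to 112635 MB. Both work out to the same ~10 GB.

```shell
# Partition size from fdisk, converted KB -> MB
echo $(( 10482381 / 1024 ))    # prints 10236

# Growth of the diskgroup in MB (new Total_MB minus old Total_MB)
echo $(( 112635 - 102399 ))    # prints 10236
```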

  
5) Extend the ACFS file system (as the root user):
#on node1
[root@node1: ~]# /sbin/acfsutil size +10240M /prodtop 
acfsutil size: new file system size: 112742891520 (107520MB)
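Two small checks around the resize (plain shell arithmetic, safe to run anywhere). First, `gb_to_acfs_arg` is a hypothetical helper that turns a target size in GB into the `acfsutil size` argument; second, confirm that the byte count acfsutil reported matches its MB figure (112742891520 / 1048576 = 107520 MB, i.e. 105 GB).

```shell
# Hypothetical helper: GB -> "+<MB>M" argument for acfsutil size
gb_to_acfs_arg() { echo "+$(( $1 * 1024 ))M"; }

gb_to_acfs_arg 10                    # prints: +10240M

# Bytes reported by acfsutil, converted to MB
echo $(( 112742891520 / 1048576 ))   # prints: 107520

# Real usage would be: /sbin/acfsutil size $(gb_to_acfs_arg 10) /prodtop
```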

6) Final validation from both nodes:
#for node 1
[root@node1: ~]# df -Th /prodtop/
Filesystem            Type  Size  Used Avail Use% Mounted on
/dev/asm/prodtop-153  ACFS  105G   85G   21G  81% /prodtop
 
#for node2
[root@node2: ~]# df -Th /prodtop/
Filesystem            Type  Size  Used Avail Use% Mounted on
/dev/asm/prodtop-153  ACFS  105G   85G   21G  81% /prodtop



Other useful SQL commands to get details of the ACFS disk groups from the ASM instance:

set lines 500
col FS_NAME for a10
col VOL_DEVICE for a30
col VOL_LABEL for a10
select * from v$asm_acfsvolumes;

FS_NAME    VOL_DEVICE                     VOL_LABEL  PRIMA   TOTAL_MB    FREE_MB
---------- ------------------------------ ---------- ----- ---------- ----------
/prodtop   /dev/asm/prodtop-153                      TRUE       97280 11752.1875
/stage     /dev/asm/stage-984                        TRUE       97280 39816.6602


select GROUP_NUMBER,DISK_NUMBER,NAME,STATE,TOTAL_MB,FREE_MB from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME         STATE    TOTAL_MB FREE_MB
------------ ----------- ------------ -------- -------- -------
           4           0 STAGE_0000   NORMAL     102399    5021
           3           0 PRODTOP_0000 NORMAL     102399    5021
           1           2 GRID_0002    NORMAL       5119    4811
           1           1 GRID_0001    NORMAL       5119    4810
           1           0 GRID_0000    NORMAL       5119    4810

 
 
select GROUP_NUMBER,NAME,STATE,TOTAL_MB,FREE_MB,COMPATIBILITY from v$asm_diskgroup;

GROUP_NUMBER NAME                           STATE         TOTAL_MB    FREE_MB COMPATIBILITY
------------ ------------------------------ ----------- ---------- ---------- ---------------
           1 GRID                           MOUNTED          15357      14431 11.2.0.0.0
           3 PRODTOP                        MOUNTED         102399       5021 11.2.0.0.0
           4 STAGE                          MOUNTED         102399       5021 11.2.0.0.0
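A hypothetical helper to go with the v$asm_diskgroup output above: compute percent used from TOTAL_MB and FREE_MB, a quick way to spot groups that are close to full. The figures fed in below are the PRODTOP row from that output.

```shell
# Hypothetical helper: percent used, given TOTAL_MB and FREE_MB
pct_used() { awk -v t="$1" -v f="$2" 'BEGIN { printf "%d\n", (t - f) * 100 / t }'; }

pct_used 102399 5021   # prints: 95
```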

