Saturday, November 20, 2010

11gR2 ASMCMD commands

chdg - Changes an existing disk group (add disks, drop disks, or rebalance) based on an XML configuration file. You can use ALTER DISKGROUP ... commands for the same tasks, but here we are learning the ASMCMD command extensions in 11gR2. The chdg command adds disks, drops disks, or sets the rebalance power level on an existing disk group.

          Syntax : chdg {config_file.xml | 'contents_of_xml_file'}

XML configuration template

<chdg>              update disk clause (add/drop disks/failure groups)
   name             disk group to change
   power            power to perform rebalance

   <add>            items to add are placed here
   </add>

   <drop>           items to drop are placed here
   </drop>

   <fg>             failure group
      name          failure group name
   </fg>

   <dsk>            disk
      name          disk name
      path          disk path
      size          size of the disk to add
   </dsk>
</chdg>
 
Example:
We will add disk /dev/rdisk/disk61 to the existing disk group DATA and set the rebalance power level to 4. First, find the existing disks in disk group DATA:

SQL> select name,path from v$asm_disk where group_number=1;
NAME          PATH
--------------- -----------------
DATA_0000 /dev/rdisk/disk50
DATA_0001 /dev/rdisk/disk51
DATA_0002 /dev/rdisk/disk60

Create the following XML configuration file and save it as adddsk.xml:

<chdg name="data" power="4">
<add>
<dsk string="/dev/rdisk/disk61"/>
</add>
</chdg>

and execute the following:

$ asmcmd
ASMCMD> chdg adddsk.xml
ASMCMD>
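If you script disk group changes, the chdg XML does not have to be written by hand. The sketch below is a hypothetical helper (not part of ASMCMD) that builds an adddsk.xml/dropdsk.xml-style document with Python's standard library; the element and attribute names (<chdg>, <add>, <drop>, <dsk>, name, power, string) follow the template shown above.

```python
# Build a chdg configuration document like adddsk.xml / dropdsk.xml.
import xml.etree.ElementTree as ET

def build_chdg_xml(diskgroup, power, add_paths=(), drop_names=()):
    chdg = ET.Element("chdg", name=diskgroup, power=str(power))
    if add_paths:
        add = ET.SubElement(chdg, "add")
        for path in add_paths:
            ET.SubElement(add, "dsk", string=path)   # disks to add, by path
    if drop_names:
        drop = ET.SubElement(chdg, "drop")
        for disk in drop_names:
            ET.SubElement(drop, "dsk", name=disk)    # disks to drop, by ASM name
    return ET.tostring(chdg, encoding="unicode")

# Writes the same content as the adddsk.xml shown above.
with open("adddsk.xml", "w") as f:
    f.write(build_chdg_xml("data", 4, add_paths=["/dev/rdisk/disk61"]))
```

The resulting file can then be passed to chdg exactly as in the session above.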

Now check again to see disks in DATA disk group

SQL> select name,path from v$asm_disk where group_number=1;
NAME          PATH
--------------- -----------------
DATA_0000 /dev/rdisk/disk50
DATA_0001 /dev/rdisk/disk51
DATA_0002 /dev/rdisk/disk60
DATA_0003 /dev/rdisk/disk61 <--- New disk added

Let's drop this disk with the chdg command. (You could use the ALTER DISKGROUP DATA DROP DISK command instead.)

Create an XML file

<chdg name="data" power="4">
<drop>
<dsk name="DATA_0003"/>
</drop>
</chdg>

and save it as dropdsk.xml, then execute the following.

$ asmcmd
ASMCMD> chdg dropdsk.xml
ASMCMD>

Now check again to see disks in DATA disk group

SQL> select name,path from v$asm_disk where group_number=1;
NAME          PATH
--------------- -----------------
DATA_0000 /dev/rdisk/disk50
DATA_0001 /dev/rdisk/disk51
DATA_0002 /dev/rdisk/disk60
 
The disk DATA_0003 no longer exists!
 
chkdg - Checks or repairs a disk group.

The 11gR2 ASM CHECK command verifies:
  • the consistency of the disks
  • that the alias directory is linked correctly
  • all metadata directories and the internal consistency of the ASM disk group metadata
It writes its findings to the alert log and also displays them on the Database Control page. In 11gR2 the default is NOREPAIR.

       Syntax : chkdg [--repair] <<diskgroupname>>

Example:

ASMCMD> chkdg data
ASMCMD>

The following are the contents of the ASM alert log file:
...
...
SQL> /* ASMCMD */ALTER DISKGROUP data CHECK NOREPAIR
NOTE: starting check of diskgroup DATA
kfdp_checkDsk(): 6
kfdp_checkDsk(): 7
kfdp_checkDsk(): 8
SUCCESS: check of diskgroup DATA found no errors
SUCCESS: /* ASMCMD */ALTER DISKGROUP data CHECK NOREPAIR
...
...
 
mkdg - Creates a disk group based on an XML configuration file.
 
     Syntax : mkdg {config_file.xml | 'contents_of_xml_file'}


XML configuration template

<dg>                disk group
   name             disk group name
   redundancy       normal, external, high

   <fg>             failure group
      name          failure group name
   </fg>

   <dsk>            disk
      name          disk name
      path          disk path
      size          size of the disk to add
   </dsk>

   <a>              attribute
      name          attribute name
      value         attribute value
   </a>
</dg>

 
Example: Create a new disk group DATA2

First, create an XML configuration file with external redundancy and save it as mkdg.xml
 
<dg name="data2" redundancy="external">
<dsk string="/dev/rdisk/disk61"/>
<a name="compatible.rdbms" value="10.2"/>
</dg>

$ls -l mkdg.xml
-rw-r--r-- 1 oracle oinstall 86 Nov 20 10:59 mkdg.xml

$asmcmd
ASMCMD>mkdg mkdg.xml
ASMCMD>
ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 1024 4096 1048576 80896 78196 0 78196 0 N DATA/
MOUNTED EXTERN N 1024 4096 1048576 76800 76750 0 76750 0 N DATA2/
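A malformed configuration file only fails once it reaches ASMCMD, so it can be worth sanity-checking the mkdg XML first. The sketch below is a hypothetical validator written against the template shown above; it is not an ASMCMD feature, just a standard-library Python helper.

```python
# Sanity-check a mkdg XML document before passing it to ASMCMD.
import xml.etree.ElementTree as ET

VALID_REDUNDANCY = {"normal", "external", "high"}  # values from the template above

def check_mkdg_xml(xml_text):
    dg = ET.fromstring(xml_text)
    if dg.tag != "dg":
        raise ValueError("root element must be <dg>")
    redundancy = dg.get("redundancy")
    if redundancy not in VALID_REDUNDANCY:
        raise ValueError("bad redundancy: %r" % redundancy)
    # Disks may be given by path (string=) or by name (name=).
    disks = [d.get("string") or d.get("name") for d in dg.iter("dsk")]
    attrs = {a.get("name"): a.get("value") for a in dg.iter("a")}
    return dg.get("name"), redundancy, disks, attrs

# Check the mkdg.xml contents used in the example above.
name, redundancy, disks, attrs = check_mkdg_xml(
    '<dg name="data2" redundancy="external">'
    '<dsk string="/dev/rdisk/disk61"/>'
    '<a name="compatible.rdbms" value="10.2"/>'
    '</dg>'
)
print(name, redundancy, disks, attrs)
```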
 
lsdsk - Lists Oracle ASM disks. It runs in connected mode first, pulling information from the V$ASM_DISK_STAT and V$ASM_DISK dynamic views; otherwise it runs in non-connected mode and pulls information from the disk headers. The -I option forces non-connected mode.

Syntax :  lsdsk [-kptgMHI] [-G diskgroup] [--member|--candidate]
   [--discovery] [--statistics] [pattern]

-k Displays the TOTAL_MB, FREE_MB, OS_MB, NAME, FAILGROUP, LIBRARY, LABEL, UDID, PRODUCT, REDUNDANCY, and PATH columns of the V$ASM_DISK view.

--statistics Displays the READS, WRITES, READ_ERRS, WRITE_ERRS, READ_TIME, WRITE_TIME, BYTES_READ, BYTES_WRITTEN, and the PATH columns of the V$ASM_DISK view.

-p Displays the GROUP_NUMBER, DISK_NUMBER, INCARNATION, MOUNT_STATUS, HEADER_STATUS, MODE_STATUS, STATE, and the PATH columns of the V$ASM_DISK view.

-t Displays the CREATE_DATE, MOUNT_DATE, REPAIR_TIMER, and the PATH columns of the V$ASM_DISK view.

-g Selects from GV$ASM_DISK_STAT, or from GV$ASM_DISK if the --discovery flag is also specified. GV$ASM_DISK.INST_ID is included in the output.

--discovery Selects from V$ASM_DISK, or from GV$ASM_DISK if the -g flag is also specified. This option is always enabled if the Oracle ASM instance is version 10.1 or earlier. This flag is disregarded if lsdsk is running in non-connected mode.

-H Suppresses column headings.

-I Scans disk headers for information rather than extracting the information from an Oracle ASM instance. This option forces non-connected mode.

-G Restricts results to only those disks that belong to the group specified by diskgroup.

-M Displays the disks that are visible to some but not all active instances. These are disks that, if included in a disk group, cause the mount of that disk group to fail on the instances where the disks are not visible.

--candidate Restricts results to only disks having membership status equal to CANDIDATE.

--member Restricts results to only disks having membership status equal to MEMBER.

pattern Returns only information about the specified disks that match the supplied pattern.

Example 1:

$ asmcmd
ASMCMD> lsdsk
Path
/dev/rdisk/disk50
/dev/rdisk/disk51
/dev/rdisk/disk60
/dev/rdisk/disk61

Example 2:
The following command displays the disks attached to disk group DATA2 and their space information.

ASMCMD> lsdsk -k -G DATA2
Total_MB Free_MB OS_MB Name Failgroup Library Label UDID Product Redund Path
76800 76750 76800 DATA2_0000 DATA2_0000 System UNKNOWN /dev/rdisk/disk61

Example 3:
The following shows I/O statistics for the disks in the DATA2 disk group

ASMCMD> lsdsk -t -G DATA2 --statistics
Reads Write Read_Errs Write_Errs Read_time Write_Time Bytes_Read Bytes_Written Voting_File Create_Date Mount_Date Repair_Timer Path
18 447 0 0 .026287 3.841985 77824 1830912 N 20-NOV-10 20-NOV-10 0 /dev/rdisk/disk61

Example 4:
The following displays disks attached to DATA2 and DATA disk groups

ASMCMD> lsdsk -G DATA2
Path
/dev/rdisk/disk61
ASMCMD> lsdsk -G DATA
Path
/dev/rdisk/disk50
/dev/rdisk/disk51
/dev/rdisk/disk60
ASMCMD>
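If you collect lsdsk output in scripts, note that empty columns (Label, UDID, Product in Example 2) collapse under naive whitespace splitting. The sketch below is a hypothetical parser for one line of `lsdsk -k -H` output (headings suppressed with -H); it relies only on fields with fixed positions: the three leading size columns, the disk name, and the trailing path.

```python
# Parse one line of `lsdsk -k -H` output.
def parse_lsdsk_k(line):
    fields = line.split()
    return {
        "total_mb": int(fields[0]),
        "free_mb": int(fields[1]),
        "os_mb": int(fields[2]),
        "name": fields[3],
        "path": fields[-1],   # path is always the last column
    }

# The sample line is the DATA2 disk from Example 2 above.
sample = "76800 76750 76800 DATA2_0000 DATA2_0000 System UNKNOWN /dev/rdisk/disk61"
info = parse_lsdsk_k(sample)
print(info["name"], info["free_mb"], info["path"])
# DATA2_0000 76750 /dev/rdisk/disk61
```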
 
dropdg - Drops a disk group. For a disk group that cannot be mounted by ASM, dropdg marks the headers of the disks belonging to it as FORMER. The command fails if the disk group is in use by another node or ASM instance.

The -r option (INCLUDING CONTENTS) drops the disk group along with any files it still contains. The -f (FORCE) option should be used with caution: it skips the check for whether the disk group is in use by another ASM instance and clears all disks in the disk group.

   Syntax:  dropdg [-r] [-f] <<diskgroup>>
 

Example:

ASMCMD> dropdg data2
ORA-15039: diskgroup not dropped
ORA-15053: diskgroup "DATA2" contains existing files (DBD ERROR: OCIStmtExecute)
ASMCMD>dropdg -r data2
ASMCMD>
 
iostat -Displays I/O statistics for disks.

lsdg - Displays disk groups and their information. The lsdg command queries V$ASM_DISKGROUP_STAT by default. If the --discovery flag is specified, V$ASM_DISKGROUP is queried instead.

Example:

ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 1024 4096 1048576 80896 78196 0 78196 0 N DATA/
MOUNTED EXTERN N 1024 4096 1048576 76800 76750 0 76750 0 N DATA2/


umount - Dismounts a disk group.

       Syntax:  umount { -a | [-f] diskgroup }

-a Dismounts all mounted disk groups.


-f Forces the dismount operation.


Example: The following example first checks the disk groups with the lsdg command and then dismounts the data2 disk group. Running lsdg again shows that data2 is dismounted.

ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 1024 4096 1048576 80896 78196 0 78196 0 N DATA/
MOUNTED EXTERN N 1024 4096 1048576 76800 76750 0 76750 0 N DATA2/
ASMCMD> umount data2
ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 1024 4096 1048576 80896 78196 0 78196 0 N DATA/
ASMCMD>
 
mount - Mounts a disk group. You can mount an ASM disk group in restricted mode for maintenance or rebalance operations; in this mode clients cannot access files in the disk group. If you are running RAC, MOUNT RESTRICT mounts the disk group exclusively on that instance, and clients cannot access files in the disk group until it is mounted back in normal mode.

Why restricted mode?
It improves rebalance performance, because there are no external connections to the disk group.

Syntax: mount [--restrict] { [-a] | [-f] diskgroup[ diskgroup ...] }

-a Mounts all disk groups.

--restrict Mounts in restricted mode.

-f Forces the mount operation.

Example:
In the previous umount example we left DATA2 dismounted. Let's mount the DATA2 disk group in restricted mode, then dismount it and mount it again in normal mode.

ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 1024 4096 1048576 80896 78196 0 78196 0 N DATA/
ASMCMD>

ASMCMD> mount --restrict DATA2
ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 1024 4096 1048576 80896 78196 0 78196 0 N DATA/
RESTRICTED EXTERN N 1024 4096 1048576 76800 76750 0 76750 0 N DATA2/
The State column in the output above shows RESTRICTED for DATA2.
ASMCMD> umount data2
ASMCMD> mount data2
ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 1024 4096 1048576 80896 78196 0 78196 0 N DATA/
MOUNTED EXTERN N 1024 4096 1048576 76800 76750 0 76750 0 N DATA2/
DATA2 is no longer in RESTRICTED mode.
 
offline - Offlines disks or failure groups that belong to a disk group. You cannot take a disk offline in a disk group with external redundancy, since there is no mirror copy to serve I/O while the disk is offline.

Syntax: offline -G diskgroup { -F failgroup |-D disk} [-t {minutes | hours}]

-G diskgroup Disk group name.


-F failgroup Failure group name.


-D disk Specifies a single disk name.


-t minutes | hours Specifies the time before the specified disk is dropped as nm or nh, where m specifies minutes and h specifies hours. The default unit is hours.


Example:
Let's add a disk (/dev/rdisk/disk62) to the DATA2 disk group with the chdg command, using an updated adddsk.xml.

ASMCMD> chdg adddsk.xml
ASMCMD> lsdsk -G DATA2
Path
/dev/rdisk/disk61
/dev/rdisk/disk62 <-- New disk added
ASMCMD>
ASMCMD> lsdsk -k -G data2
Total_MB Free_MB OS_MB Name Failgroup Library Label UDID Product Redund Path
76800 76774 76800 DATA2_0000 DATA2_0000 System UNKNOWN /dev/rdisk/disk61
76800 76774 76800 DATA2_0001 DATA2_0001 System UNKNOWN /dev/rdisk/disk62
ASMCMD> offline -G data2 -D data2_0001
ORA-15067: command or option incompatible with diskgroup redundancy (DBD ERROR: OCIStmtExecute)
ASMCMD>
 
 
online - Onlines all disks, a single disk, or a failure group that belongs to a disk group.
 
 Syntax : online { [-a] -G diskgroup | -F failgroup |-D disk} [-w]

-a Online all offline disks in the disk group.


-G diskgroup Disk group name.


-F failgroup Failure group name.


-D disk Disk name.


-w Wait option. Causes ASMCMD to wait for the disk group to be rebalanced before returning control to the user. The default is not waiting.

 
rebal - Rebalances a disk group; it is useful if you have added disks to a disk group and want to balance I/O across them. The power level can be set from 0 to 11. A value of 0 disables rebalancing. If the rebalance power is not specified, the value defaults to the setting of the ASM_POWER_LIMIT initialization parameter.
You can determine whether a rebalance operation is occurring with the ASMCMD lsop command.

Syntax: rebal [--power power] [-w] diskgroup

--power power Power setting (0 to 11).


-w Wait option. Causes ASMCMD to wait for the disk group to be rebalanced before returning control to the user. The default is not waiting.

Example:
The following example rebalances the data2 disk group with the power level set to 4.
 
ASMCMD> lsop
Group_Name Dsk_Num State Power <--- means no rebalance activity is going on
ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 1024 4096 1048576 80896 78196 0 78196 0 N DATA/
MOUNTED EXTERN N 1024 4096 1048576 153600 153548 0 153548 0 N DATA2/
ASMCMD>
ASMCMD> rebal --power 4 data2
ASMCMD> lsop
Group_Name Dsk_Num State Power
DATA2 REBAL WAIT 4 <--- rebalance is currently running...
ASMCMD> lsop
Group_Name Dsk_Num State Power <--- empty again: the rebalance has completed.

The STATE can be one of the following:
  - WAIT : the operation is queued and not yet running
  - RUN : the rebalance is running
  - REAP : the rebalance operation has been stopped
  - HALT : halted by an administrator
  - ERRORS : halted due to errors during the rebalance operation
 

md_backup, md_restore - md_backup creates a backup file on a filesystem containing the metadata of an ASM disk group; you can restore from this backup file with the md_restore command.

Syntax: md_backup <<backupfilename>> [-G <<diskgroup>>]
 
When you restore an RMAN backup to a lost disk group or to a different server, you will get errors like

ORA-01119: error in creating database file ...
ORA-17502: ksfdcre:4 Failed to create file ...
ORA-15001: diskgroup "DATA" does not exist or is not mounted

You have two options to restore:

1. Use SET NEWNAME FOR DATAFILE <<fileno#>> TO <<new diskgroup>> or the db_file_name_convert option to restore the files to a new disk group.

2. Recreate the ASM disk group manually, along with any user-defined directory structures inside it.
Let's try the second option with an example.

Example: For this example I will create several directory paths and one tablespace ts1 with 2 datafiles on the DATA2 disk group. We will take a tablespace backup and a DATA2 disk group metadata backup, then restore DATA2 and its directory tree with md_restore and the tablespace datafiles from the RMAN backup.


ASMCMD> cd DATA2
ASMCMD>mkdir mydir1
ASMCMD>mkdir mydir2
ASMCMD>ls -l
Type Redund Striped Time Sys Name
                                            N mydir2/
                                            N mydir1/

ASMCMD> cd mydir1
ASMCMD> ls -l
ASMCMD>mkdir ts1_dir
ASMCMD>mkdir ts2_dir
ASMCMD>ls -l
Type Redund Striped Time Sys Name
                                            N ts1_dir/
                                            N ts2_dir/

Create a tablespace and create one table inside it.
SQL> create tablespace ts1 datafile '+DATA2/ts1.dbf' size 1m;
Tablespace created.

SQL> alter tablespace ts1 add datafile '+DATA2/ts2.dbf' size 2m;
Tablespace altered.

SQL> connect scott/tiger

SQL> create table test tablespace ts1
as select * from user_objects;
Table created

SQL> select count(1) from test;
COUNT(1)
----------
7

Take the ASM DATA2 disk group metadata backup:

ASMCMD> md_backup data2asm_backup -G DATA2
Disk group metadata to be backed up: DATA2
Current alias directory path: mydir1/ts2_dir
Current alias directory path: mydir1
Current alias directory path: mydir2
Current alias directory path: mydir1/ts1_dir
Current alias directory path: TEST
Current alias directory path: TEST/DATAFILE

ASMCMD> exit

$ ls -lt
-rw-r--r-- 1 oracle oinstall 13418 Nov 20 13:03 data2asm_backup

Take an RMAN backup of tablespace ts1 with the following commands.

RMAN> run {
2> allocate channel c1 type disk;
3> backup tablespace ts1 format "/backup/test/ts1_%s_%t";
4> }
using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=51 instance=TEST1 devtype=DISK
Starting backup at 20-NOV-10
channel c1: starting full datafile backupset
channel c1: specifying datafile(s) in backupset
input datafile fno=00007 name=+DATA2/ts2.dbf
input datafile fno=00006 name=+DATA2/ts1.dbf
channel c1: starting piece 1 at 20-NOV-10
channel c1: finished piece 1 at 20-NOV-10
piece handle=/backup/test/ts1_11_735580273 tag=TAG20101120T155112 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:01
Finished backup at 20-NOV-10
released channel: c1
RMAN>
 
 
SQL> alter tablespace ts1 offline;
Tablespace altered.
 
Now drop the DATA2 disk group with force option.

$asmcmd
ASMCMD> dropdg data2
ORA-15039: diskgroup not dropped
ORA-15053: diskgroup "DATA2" contains existing files (DBD ERROR: OCIStmtExecute)
ASMCMD>dropdg -r data2
ASMCMD>
 
SQL>connect scott/tiger
 
SQL> select * from test;
select * from test
*
ERROR at line 1:
ORA-00376: file 6 cannot be read at this time
ORA-01110: data file 6: '+DATA2/ts1.dbf'

Now let's try to restore the ts1 tablespace files from the RMAN backup. This fails, because DATA2 no longer exists:

RMAN> run {
2> allocate channel c1 type disk format '/backup/test/ts1_%s_%t' ;
3> restore tablespace ts1 ;
4> }
using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=169 instance=TEST1 devtype=DISK
Starting restore at 20-NOV-10
channel c1: starting datafile backupset restore
channel c1: specifying datafile(s) to restore from backup set
restoring datafile 00006 to +DATA2/ts1.dbf
restoring datafile 00007 to +DATA2/ts2.dbf
channel c1: reading from backup piece /backup/test/ts1_11_735580273
ORA-19870: error reading backup piece /backup/test/ts1_11_735580273
ORA-19504: failed to create file "+DATA2/ts2.dbf"
ORA-17502: ksfdcre:3 Failed to create file +DATA2/ts2.dbf
ORA-15001: diskgroup "DATA2" does not exist or is not mounted <---- No diskgroup exists
ORA-15001: diskgroup "DATA2" does not exist or is not mounted <---- No such diskgroup exists
failover to previous backup
creating datafile fno=7 name=+DATA2/ts2.dbf
released channel: c1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 11/20/2010 15:57:13
ORA-01119: error in creating database file '+DATA2/ts2.dbf'
ORA-17502: ksfdcre:4 Failed to create file +DATA2/ts2.dbf
ORA-15001: diskgroup "DATA2" does not exist or is not mounted
ORA-15001: diskgroup "DATA2" does not exist or is not mounted
 
Let's use the ASM md_restore command to recreate the DATA2 disk group from the backup. This restores all the metadata and recreates the directory structure.
 
$ asmcmd
ASMCMD> md_restore data2asm_backup
Current Diskgroup metadata being restored: DATA2
Diskgroup DATA2 created!
System template ONLINELOG modified!
System template AUTOBACKUP modified!
System template ASMPARAMETERFILE modified!
System template OCRFILE modified!
System template ASM_STALE modified!
System template OCRBACKUP modified!
System template PARAMETERFILE modified!
System template ASMPARAMETERBAKFILE modified!
System template FLASHFILE modified!
System template XTRANSPORT modified!
System template DATAGUARDCONFIG modified!
System template TEMPFILE modified!
System template ARCHIVELOG modified!
System template CONTROLFILE modified!
System template DUMPSET modified!
System template BACKUPSET modified!
System template FLASHBACK modified!
System template DATAFILE modified!
System template CHANGETRACKING modified!
Directory +DATA2/mydir1 re-created!
Directory +DATA2/TEST re-created!
Directory +DATA2/mydir2 re-created!
Directory +DATA2/mydir1/ts2_dir re-created!
Directory +DATA2/mydir1/ts1_dir re-created!
Directory +DATA2/TEST/DATAFILE re-created!

ASMCMD>
Restore the tablespace ts1 datafiles from the RMAN backup:

RMAN> run {
2> allocate channel c1 type disk format '/backup/test/ts1_%s_%t' ;
3> restore tablespace ts1 ;
4> }
using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=167 instance=TEST1 devtype=DISK
Starting restore at 20-NOV-10
channel c1: starting datafile backupset restore
channel c1: specifying datafile(s) to restore from backup set
restoring datafile 00006 to +DATA2/ts1.dbf
restoring datafile 00007 to +DATA2/ts2.dbf
channel c1: reading from backup piece /backup/test/ts1_11_735580273
channel c1: restored backup piece 1
piece handle=/backup/test/ts1_11_735580273 tag=TAG20101120T155112
channel c1: restore complete, elapsed time: 00:00:01
Finished restore at 20-NOV-10
released channel: c1
RMAN>
RMAN>

SQL> alter tablespace ts1 online;
alter tablespace ts1 online
*
ERROR at line 1:
ORA-01113: file 6 needs media recovery
ORA-01110: data file 6: '+DATA2/ts1.dbf'
 
SQL> recover tablespace ts1;
Media recovery complete.
SQL> alter tablespace ts1 online;
Tablespace altered.
SQL> connect scott/tiger
Connected.
SQL> select count(1) from test;
COUNT(1)
----------
7
 
cp - This command is going to make your life much easier when moving databases across servers. It allows you to copy files between an ASM disk group and the OS filesystem. In earlier releases you had to use either RMAN commands or set up FTP to move files.

10g Example:

In 10gR2, this is how you had to set up FTP with Oracle XML DB:

- Connect to the Oracle instance as SYS and execute

@ORACLE_HOME/rdbms/admin/catxdbdbca 7777 8080

This enables the FTP service on port 7777 and the HTTP service on port 8080.
- Use ftp to move files between ASM and the filesystem

FTP> open <<hostname>> 7777

331 pass required for SYSTEM
Password:
230 SYSTEM logged in
ftp>

Relax! In 11g you can move files just by using the cp command.

11gR2 Example:

$ ls -l
-rw-r----- 1 oracle oinstall 212992 Nov 20 15:51 ts1_11_735580273

$ asmcmd
ASMCMD> cp /backup/test/ts1_11_735580273 +DATA/
copying /backup/test/ts1_11_735580273 -> +DATA/ts1_11_735580273
ASMCMD> cd +DATA
ASMCMD> ls -l
Type Redund Striped Time Sys Name
Y ASM/
Y TEST/
N archlogs/
Y test-mvip/
N ts1_11_735580273 => +DATA/ASM/BACKUPSET/ts1_11_735580273.304.735585509
ASMCMD>
 

Saturday, November 13, 2010

Duplicate 10gR2 RAC database instances to non-RAC single database instance

In this blog we will discuss how to duplicate a RAC database, which is running on ASM, to a non-ASM single-node database instance.

Connect to your source RAC database and find out how much space you need for your target database filesystem.

SQL> select sum(bytes)/(1024)/1024/1024 as "Size in GB" from v$datafile;

Size in GB
---------------------------
63.06

SQL> select sum(bytes)/(1024)/1024/1024 as "Size in GB" from v$tempfile;

Size in GB
---------------------------
5.76

So we need approximately 69 GB. Let's see how much space the ASM disk groups show on the source database. I will be restoring these ASM Oracle database files into the /u02 filesystem.
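The sizing arithmetic from the two queries above can be sketched as follows: v$datafile and v$tempfile report bytes, and dividing by 1024 three times (1024**3) converts to GB.

```python
# Back-of-the-envelope sizing from the two queries above.
# The GB figures are the query results shown in the text.
datafile_bytes = 63.06 * 1024**3   # sum(bytes) from v$datafile
tempfile_bytes = 5.76 * 1024**3    # sum(bytes) from v$tempfile

total_gb = (datafile_bytes + tempfile_bytes) / 1024**3
print(round(total_gb, 2))  # 68.82, i.e. roughly 69 GB
```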

SQL> select name,total_mb/1024 as "Total Size(GB)",Free_mb/(1024) "Free(GB)" from v$asm_diskgroup;

NAME        Total Size(GB)   Free(GB)
----------  ---------------  ---------
ARCHLOGS          10.98         10.82
DATA             100.48         20.62
TEMPFILE          20.98         14.10


Connect to a RAC database node and create a script to change the file locations from ASM to non-ASM (/u02). You can skip this step if you are cloning to ASM with the same disk group name.

SQL> connect system/oracle@RAC-node1
SQL> spool clone.rman
SQL> select 'set newname for datafile '||file#||' to '''||'/u02/'||substr(name,20,40)||''';' from v$datafile
     union
     select 'set newname for tempfile '||file#||' to '''||'/u02/'||substr(name,20,40)||''';' from v$tempfile;
SQL>
set newname for datafile 1 to '/u02/system.ora';
set newname for datafile 2 to '/u02/sysaux.ora';
set newname for datafile 3 to '/u02/tools.ora';
set newname for datafile 4 to '/u02/undotbs1.ora';
set newname for datafile 5 to '/u02/undotbs2.ora';
set newname for datafile 6 to '/u02/users.ora';
set newname for datafile 7 to '/u02/indx.ora';
set newname for datafile 8 to '/u02/undotbs2.ora';
set newname for datafile 9 to '/u02/xdb.ora';
set newname for tempfile 2 to '/u02/temp1';
SQL> spool off;
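The same SET NEWNAME lines can also be generated outside SQL*Plus. A small sketch follows; the file numbers and target names are illustrative, taken from the spooled output above, not queried from a database.

```python
# Generate RMAN SET NEWNAME commands from (file#, name) pairs,
# mirroring the spooled clone.rman script above.
def set_newname(kind, file_no, target):
    return f"set newname for {kind} {file_no} to '{target}';"

# Illustrative subset of the datafile mapping shown in the text.
datafiles = {1: "system", 2: "sysaux", 3: "tools", 4: "undotbs1"}

lines = [set_newname("datafile", n, f"/u02/{name}.ora")
         for n, name in sorted(datafiles.items())]
lines.append(set_newname("tempfile", 2, "/u02/temp1"))
print("\n".join(lines))
```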

We have full and archivelog backups of the RAC database on disk (/backup) on a shared filesystem. FTP your backups from the source /backup location to /backup on the target server.

In this example we are cloning to an ORCL auxiliary instance on a separate server, though you could clone onto one of the RAC nodes. Create a pfile for the auxiliary instance ORCL in the $ORACLE_HOME/dbs/ directory on your target server.

Copy clone.rman to the target server and make the following changes; afterwards your clone.rman script will look something like this.

In this example I am cloning to a point in time, so I added the following line inside the clone.rman script:

    set until time "to_date('10-Nov-2010 14:35:00', 'DD-MON-YYYY HH24:MI:SS')";

You can comment this line out if you want to clone up to the point of the last backup.

run
{
allocate auxiliary channel c1 device type disk;
allocate auxiliary channel c2 device type disk;

set newname for datafile 1 to '/u02/system.ora';
set newname for datafile 2 to '/u02/sysaux.ora';
set newname for datafile 3 to '/u02/tools.ora';
set newname for datafile 4 to '/u02/undotbs1.ora';
set newname for datafile 5 to '/u02/undotbs2.ora';
set newname for datafile 6 to '/u02/users.ora';
set newname for datafile 7 to '/u02/indx.ora';
set newname for datafile 8 to '/u02/undotbs2.ora';
set newname for datafile 9 to '/u02/xdb.ora';
set newname for tempfile 2 to '/u02/temp1';
set until time "to_date('10-Nov-2010 14:35:00', 'DD-MON-YYYY HH24:MI:SS')";

duplicate target database to ORCL
logfile
group 1 ('/u02/log01.log') size 50m,
group 2 ('/u02/log02.log') size 50m,
group 3 ('/u02/log03.log') size 50m;
}


Set the auxiliary instance SID, start the auxiliary instance in NOMOUNT, then connect to the target instance, the catalog, and the auxiliary instance and run the clone.rman script.

$ export ORACLE_SID=ORCL
$ sqlplus / as sysdba
SQL> startup nomount;

$ORACLE_HOME/bin/rman target rman_user/rmanuser@rac-node1 rcvcat rman_catalog/rman_catalog@catalog auxiliary /

run
{
allocate auxiliary channel c1 device type disk;
allocate auxiliary channel c2 device type disk;

set newname for datafile 1 to '/u02/system.ora';
set newname for datafile 2 to '/u02/sysaux.ora';
set newname for datafile 3 to '/u02/tools.ora';
set newname for datafile 4 to '/u02/undotbs1.ora';
set newname for datafile 5 to '/u02/undotbs2.ora';
set newname for datafile 6 to '/u02/users.ora';
set newname for datafile 7 to '/u02/indx.ora';
set newname for datafile 8 to '/u02/undotbs2.ora';
set newname for datafile 9 to '/u02/xdb.ora';
set newname for tempfile 2 to '/u02/temp1';

set until time "to_date('10-Nov-2010 14:35:00', 'DD-MON-YYYY HH24:MI:SS')";

duplicate target database to ORCL
logfile
group 1 ('/u02/log01.log') size 50m,
group 2 ('/u02/log02.log') size 50m,
group 3 ('/u02/log03.log') size 50m;
}

RMAN will start restoring the datafiles to the new locations and recover up to the time specified in the SET UNTIL TIME clause.

When RMAN is ready to open the database with

alter database open resetlogs

it fails with

ORACLE error from auxiliary database:
ORA-38856: cannot mark instance UNNAMED_INSTANCE_2 (redo thread 2) as enabled

This is due to bug 4355382 ("ORA-38856: Failed to open database with resetlogs when using RAC backup") in 10gR2. The target database is a RAC database being duplicated to a single-instance database, and the problem is related to the number of redo threads used by the source database and an expectation that the cloned database must have an identical number of threads.

To fix this problem:

Set the following parameter in the auxiliary init.ora file:
_no_recovery_through_resetlogs=TRUE

Then open the auxiliary database with RESETLOGS.

Once the auxiliary database has opened, remove this hidden parameter.

Your single-instance non-ASM clone of the RAC database is ready.

Monday, November 8, 2010

Creating Oracle 10g RAC database on 11g Grid Infrastructure

In the Oracle 11gR2 Grid Infrastructure software, ASM and the clusterware have been combined into one Oracle home, and ASM is part of the clusterware. Clusterware files such as the OCR and voting disks are created in ASM disk groups. This is unlike Oracle 10g, where the voting disks and OCR had to be configured before the ASM installation, as part of either OCFS (Linux) or a shared raw filesystem (Solaris).

Please check pre-11g database version compatibility with 11g Grid Infrastructure before installing; for more information see MOS ID 337737.1.

The certification matrix is in MOS ID 337737.1.
I will be installing a 10gR2 (10.2.0.4) database on 11gR2 Grid Infrastructure. I am assuming that you have already installed 11gR2 Grid Infrastructure and that ASM is up and running.

$srvctl status asm
ASM is running on node1, node2

Go to the Oracle Database software location and install the database software (10.2.0.1).

$ /usr/orasw/10g_software/database/runInstaller


Run root.sh on node1 and node2 as root.

Before creating the 10g database I decided to install the 10.2.0.4 patch set 3 on top of 10.2.0.1.

Log in as the Oracle software owner and install patch set 3.

$ /usr/orasw/10g_software/10204_patch/Disk1/runInstaller



The Oracle 10g software with patch set 3 installed with no issues. It's time to create the RAC database.

Start DBCA from 10g $ORACLE_HOME/bin

$ cd $ORACLE_HOME/bin
$ dbca

Where is my Configuration Assistant page for Real Application Clusters?

Hmm... not looking right, as the DB Configuration Assistant is not showing the cluster database selection page. Anyway, just to see what happens next, I decided to move forward and selected ASM storage management for the database files.

It failed at 2% with ORA-29702. I aborted the installation.



Checked cluster nodes status.

$ olsnodes -t
node1        Unpinned
node2        Unpinned

If the cluster is a newly created 11.2 Grid Infrastructure (CRS) installation and you want to create a pre-11.2 RAC database, you need to pin the corresponding nodes; in our case their status shows as Unpinned. If Grid Infrastructure was upgraded to 11.2, you do not need to pin. Read more in "Installing Pre 11.2 Database in 11gR2 Grid Infrastructure environment", My Oracle Support Document ID 948456.1.

 $ crsctl pin css -n node1 node2
/usr/lib/hpux64/dld.so: Unable to find library 'libhasgen10.so'.

Make sure to log in as root before making any cluster-related changes.
$ su -

# crsctl pin css -n node1 node2

CRS-4664: Node node1 successfully pinned.
CRS-4664: Node node2 successfully pinned.

#exit

$ olsnodes -t
node1        Pinned
node2        Pinned

Our nodes now show Pinned status.

Now start DBCA again from $ORACLE_HOME/bin:
$ cd $ORACLE_HOME/bin
$ dbca




Now I am able to see the RAC database screen. So far so good.

It failed again, with an error that DBCA is unable to communicate with ASM even though ASM is up and running:

"ORA-15260: permission denied on ASM disk group"

I aborted the installation and looked in MOS for possible solutions, and this is what I came across:

Due to unpublished bug 8288940, DBCA will fail if database files are located in ASM. Oracle's recommendation is to apply patch 8288940.

I downloaded and applied patch 8288940 on top of 10.2.0.4. Please follow the instructions in the README.txt file for the patch installation steps.

I started DBCA again from the 10gR2 ORACLE_HOME:

Now the installer prompts for the ASM password.

I can see the DATA disk group that I created during the 11gR2 Grid Infrastructure installation.
Finally, after clicking Next through the next couple of screens, I am able to create a 10g database on 11gR2 Grid Infrastructure.

Please look at my other blogs for the Oracle 11gR2 Grid Infrastructure installation steps, and read more in "Installing Pre-11.2 Database in 11gR2 Grid Infrastructure environment", My Oracle Support Document ID 948456.1.

Thank you and good luck!

Thursday, November 4, 2010

Oracle Procedural Gateway: data cartridge error/MQI MQPUT failed. completion code=2, reason code=2148

Oracle Procedural Gateway for MQ started giving the following errors after applying the 10.2.0.4 patch on the 10.2.0.1 Oracle Procedural Gateway Oracle home (ORACLE_HOME) and executing $ORACLE_HOME/pg4mq/admin/deploy/pgmdeploy.sql:

MQPUT returned with reason code 0
ORA-29400: data cartridge error MQI MQPUT failed.
Completion code=2, reason code=2148

In my installation the 10gR2 Procedural Gateway runs in its own Oracle home.

The Procedural Gateway logs show "Version: 2" instead of "Version: 1":

MQI: =================================
MQI: StrucId : 'OD '
MQI: Version: 2 <-----
MQI: ObjectType : 1

Digging into reason code 2148 turned up the following:

An MQPUT or MQPUT1 call was issued, but the message data contains an MQIIH structure that is not valid.

Possible errors include the following:

The StrucId field is not MQIIH_STRUC_ID.
The Version field is not MQIIH_VERSION_1.
The StrucLength field is not MQIIH_LENGTH_1.
The BufferLength parameter of the call has a value that is too small to accommodate the structure (the structure extends beyond the end of the message). This reason code occurs in the following environments: AIX, HP-UX, z/OS, OS/2, i5/OS, Solaris, Windows, plus WebSphere MQ clients connected to these systems.


I tried different ways to fix this version issue.

Option 1:
I opened the following packages under SYS and tried again after changing one package at a time.
a. In the PGM_SUP package, change the following at line 77:
MQGMO_CURRENT_VERSION constant binary_integer := 2;
to
MQGMO_CURRENT_VERSION constant binary_integer := 1;

b. In the PGM package, search for PGM_SUP.MQOD_VERSION_2 and change it to PGM_SUP.MQOD_VERSION_1. After this change I reran my gateway routine and it worked!

Option 2:
I upgraded the 10g Procedural Gateway for WebSphere MQ to 11gR2 and re-ran "pgmundeploy.sql" and "pgmdeploy.sql"; after some small listener configuration changes, things started working.

I wanted to try Option 1 again on the 10.2.0.1 base release, but since everything is working fine after the upgrade, I decided to stay with the 11gR2 Procedural Gateway and the 10g database.