Wednesday, March 27, 2013

ASM Metadata Backup and Restore

The ASMCMD md_backup and md_restore commands create and restore backups of ASM disk group metadata. Restoring this backup recreates all the ASM directory structures; without it, you would have to recreate the complete directory structure manually before restoring your databases.

Syntax: md_backup <<backup_file>> -G <<diskgroup>>
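The matching restore command is md_restore. A usage sketch (the exact options vary by ASM version; as I recall, --full recreates the disk group, --nodg restores metadata into an existing disk group, and --newdg restores into a renamed one):

```
md_restore <<backup_file>> [--full | --nodg | --newdg -o '<<olddg>>:<<newdg>>'] [-G <<diskgroup>>]
```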
 
When you restore an RMAN backup to a lost disk group, or to a different server, you will get errors like:

ORA-01119: error in creating database file ...
ORA-17502: ksfdcre:4 Failed to create file ...
ORA-15001: diskgroup "DATA" does not exist or is not mounted

You have two options to restore:

1. Use SET NEWNAME FOR DATAFILE <<file#>> TO '<<new diskgroup>>', or the db_file_name_convert parameter, to restore these files to a new disk group.

2. Recreate ASM diskgroup manually and other user defined directory structures inside that diskgroup.
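Option 1 might look like the following RMAN sketch (the file numbers and the +NEWDATA disk group name are placeholders for your environment):

```
run {
  set newname for datafile 6 to '+NEWDATA';
  set newname for datafile 7 to '+NEWDATA';
  restore tablespace ts1;
  switch datafile all;
  recover tablespace ts1;
}
```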
Let's try the second option with an example.

Example: I will create several directories and one tablespace, ts1, with two datafiles on the DATA2 disk group. We will take a tablespace backup and a DATA2 disk group metadata backup, then restore DATA2 and its directory tree with md_restore, and the tablespace datafiles from the RMAN backup.


ASMCMD> cd DATA2
ASMCMD>mkdir mydir1
ASMCMD>mkdir mydir2
ASMCMD>ls -l
Type Redund Striped Time Sys Name
                                            N mydir2/
                                            N mydir1/

ASMCMD> cd mydir1
ASMCMD> ls -l
ASMCMD>mkdir ts1_dir
ASMCMD>mkdir ts2_dir
ASMCMD>ls -l
Type Redund Striped Time Sys Name
                                            N ts1_dir/
                                            N ts2_dir/

Create a tablespace and a table inside it.
SQL> create tablespace ts1 datafile '+DATA2/ts1.dbf' size 1m;
Tablespace created.

SQL> alter tablespace ts1 add datafile '+DATA2/ts2.dbf' size 2m;
Tablespace altered

SQL> connect scott/tiger

SQL> create table test tablespace ts1
as select * from user_objects;
Table created

SQL> select count(1) from test;
COUNT(1)
----------
7

Take the ASM DATA2 disk group metadata backup:

ASMCMD> md_backup data2asm_backup -G DATA2
Disk group metadata to be backed up: DATA2
Current alias directory path: mydir1/ts2_dir
Current alias directory path: mydir1
Current alias directory path: mydir2
Current alias directory path: mydir1/ts1_dir
Current alias directory path: TEST
Current alias directory path: TEST/DATAFILE

ASMCMD> exit

$ ls -lt
-rw-r--r-- 1 oracle oinstall 13418 Nov 20 13:03 data2asm_backup

Take an RMAN backup of tablespace ts1 with the following commands.

RMAN> run {
2> allocate channel c1 type disk;
3> backup tablespace ts1 format "/backup/test/ts1_%s_%t";
4> }
using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=51 instance=TEST1 devtype=DISK
Starting backup at 20-NOV-10
channel c1: starting full datafile backupset
channel c1: specifying datafile(s) in backupset
input datafile fno=00007 name=+DATA2/ts2.dbf
input datafile fno=00006 name=+DATA2/ts1.dbf
channel c1: starting piece 1 at 20-NOV-10
channel c1: finished piece 1 at 20-NOV-10
piece handle=/backup/test/ts1_11_735580273 tag=TAG20101120T155112 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:01
Finished backup at 20-NOV-10
released channel: c1
RMAN>
 
 
SQL> alter tablespace ts1 offline;
Tablespace altered.
 
Now drop the DATA2 disk group (the -r force option is required because it contains files).

$asmcmd
ASMCMD> dropdg data2
ORA-15039: diskgroup not dropped
ORA-15053: diskgroup "DATA2" contains existing files (DBD ERROR: OCIStmtExecute)
ASMCMD>dropdg -r data2
ASMCMD>
 
SQL>connect scott/tiger
 
SQL> select * from test;
select * from test
*
ERROR at line 1:
ORA-00376: file 6 cannot be read at this time
ORA-01110: data file 6: '+DATA2/ts1.dbf'

It's time to restore ts1 tablespace files from RMAN backup.

RMAN> run {
2> allocate channel c1 type disk format '/backup/test/ts1_%s_%t' ;
3> restore tablespace ts1 ;
4> }
using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=169 instance=TEST1 devtype=DISK
Starting restore at 20-NOV-10
channel c1: starting datafile backupset restore
channel c1: specifying datafile(s) to restore from backup set
restoring datafile 00006 to +DATA2/ts1.dbf
restoring datafile 00007 to +DATA2/ts2.dbf
channel c1: reading from backup piece /backup/test/ts1_11_735580273
ORA-19870: error reading backup piece /backup/test/ts1_11_735580273
ORA-19504: failed to create file "+DATA2/ts2.dbf"
ORA-17502: ksfdcre:3 Failed to create file +DATA2/ts2.dbf
ORA-15001: diskgroup "DATA2" does not exist or is not mounted <---- No diskgroup exists
ORA-15001: diskgroup "DATA2" does not exist or is not mounted <---- No such diskgroup exists
failover to previous backup
creating datafile fno=7 name=+DATA2/ts2.dbf
released channel: c1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 11/20/2010 15:57:13
ORA-01119: error in creating database file '+DATA2/ts2.dbf'
ORA-17502: ksfdcre:4 Failed to create file +DATA2/ts2.dbf
ORA-15001: diskgroup "DATA2" does not exist or is not mounted
ORA-15001: diskgroup "DATA2" does not exist or is not mounted
 
Let's use the ASMCMD md_restore command to recreate the DATA2 disk group from the backup. This restores all the metadata and recreates the directory structure.
 
$ asmcmd
ASMCMD> md_restore data2asm_backup
Current Diskgroup metadata being restored: DATA2
Diskgroup DATA2 created!
System template ONLINELOG modified!
System template AUTOBACKUP modified!
System template ASMPARAMETERFILE modified!
System template OCRFILE modified!
System template ASM_STALE modified!
System template OCRBACKUP modified!
System template PARAMETERFILE modified!
System template ASMPARAMETERBAKFILE modified!
System template FLASHFILE modified!
System template XTRANSPORT modified!
System template DATAGUARDCONFIG modified!
System template TEMPFILE modified!
System template ARCHIVELOG modified!
System template CONTROLFILE modified!
System template DUMPSET modified!
System template BACKUPSET modified!
System template FLASHBACK modified!
System template DATAFILE modified!
System template CHANGETRACKING modified!
Directory +DATA2/mydir1 re-created!
Directory +DATA2/TEST re-created!
Directory +DATA2/mydir2 re-created!
Directory +DATA2/mydir1/ts2_dir re-created!
Directory +DATA2/mydir1/ts1_dir re-created!
Directory +DATA2/TEST/DATAFILE re-created!

ASMCMD>
Now restore the tablespace ts1 datafiles from the RMAN backup:

RMAN> run {
2> allocate channel c1 type disk format '/backup/test/ts1_%s_%t' ;
3> restore tablespace ts1 ;
4> }
using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=167 instance=TEST1 devtype=DISK
Starting restore at 20-NOV-10
channel c1: starting datafile backupset restore
channel c1: specifying datafile(s) to restore from backup set
restoring datafile 00006 to +DATA2/ts1.dbf
restoring datafile 00007 to +DATA2/ts2.dbf
channel c1: reading from backup piece /backup/test/ts1_11_735580273
channel c1: restored backup piece 1
piece handle=/backup/test/ts1_11_735580273 tag=TAG20101120T155112
channel c1: restore complete, elapsed time: 00:00:01
Finished restore at 20-NOV-10
released channel: c1
RMAN>

SQL> alter tablespace ts1 online;
alter tablespace ts1 online
*
ERROR at line 1:
ORA-01113: file 6 needs media recovery
ORA-01110: data file 6: '+DATA2/ts1.dbf'
 
SQL> recover tablespace ts1;
Media recovery complete.
SQL> alter tablespace ts1 online;
Tablespace altered.
SQL> connect scott/tiger
Connected.
SQL> select count(1) from test;
COUNT(1)
----------
7

Sunday, March 24, 2013

Multipath on Red Hat Linux

Discover iSCSI storage:

[root@test01 ~]# fdisk -l
Disk /dev/sda: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14       17769   142625070   8e  Linux LVM

<<No SAN storage is showing up yet; the above is local storage>>

-Check whether the iSCSI service is running:
     # service iscsi status
          iscsid (pid  6094) is running...

-By default the iSCSI initiator is not enabled at boot. Enter the following commands to enable it at boot.

   #chkconfig --add iscsi
   #chkconfig iscsi on
   #chkconfig --list iscsi

-Discover the iSCSI storage target.

     iscsiadm -m discovery -t st -p <<iSCSI storage IP address>>

Example :

        #iscsiadm -m discovery -t st -p 10.0.5.50

-Connect to iscsi targets

      #iscsiadm -m node -l

You should see ISCSI volumes in the output.

-Check that both adapters are connected to the array:

    #iscsiadm -m session
      tcp: [3] 10.0.5.50:3260,1 iqn.2001-05.com....
      tcp: [4] 10.0.5.50:3260,1 iqn.2001-05.com...

-Restart the iscsi service.

    #service iscsi restart

    #fdisk -l
   Disk /dev/sda: 146.1 GB, 146163105792 bytes
   255 heads, 63 sectors/track, 17769 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
  /dev/sda1   *           1          13      104391   83  Linux
  /dev/sda2              14       17769   142625070   8e  Linux LVM

   Disk /dev/sdb: 214.7 GB, 214758850560 bytes                 <--- iSCSI storage
   255 heads, 63 sectors/track, 26109 cylinders
   Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System

Set up Multipathing:


Set up multipathing for /dev/sdb through the eth6 and eth7 interfaces.

- Create an interface file for each network interface.

#iscsiadm -m iface -I eth6 --op=new
New interface eth6 added

#iscsiadm -m iface -I eth7 --op=new
New interface eth7 added

-Update interface name
#iscsiadm -m iface -I eth6 --op=update -n iface.net_ifacename -v eth6
#iscsiadm -m iface -I eth7 --op=update -n iface.net_ifacename -v eth7

This creates two files (eth6 and eth7) under /var/lib/iscsi/ifaces/.
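The generated files typically contain entries along these lines (a sketch; the exact fields vary with the open-iscsi version):

```
# /var/lib/iscsi/ifaces/eth6
iface.iscsi_ifacename = eth6
iface.net_ifacename = eth6
iface.transport_name = tcp
```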

-Discover your storage volumes again

#iscsiadm -m discovery -t st -p 10.0.5.50

-Check your disks; you should now see each volume twice, once per path.

#fdisk -l

Disk /dev/sda: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14       17769   142625070   8e  Linux LVM

Disk /dev/sdb: 214.7 GB, 214758850560 bytes
255 heads, 63 sectors/track, 26109 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System      <--- The same LUN appears twice (as sdb and sdc), once per path.

Disk /dev/sdc: 214.7 GB, 214758850560 bytes
255 heads, 63 sectors/track, 26109 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System


After a reboot there is no guarantee that these disks will come up with the same device names. To address this you need to use persistent device naming. Before that, verify that multipath is configured correctly with the following commands.

#multipath -v2
#multipath -ll
#

The above commands show no output, which means multipathing is not enabled. Modify /etc/multipath.conf to enable it.

Change the following blacklist entry in multipath.conf to enable multipathing on the /dev/sd? devices.

from

       blacklist {
        devnode "*"
       }

To

blacklist {
        devnode "^sd$"
}
Now run the following commands again to verify multipathing.

#multipath -v2
#multipath -ll
mpath6 (3560xty456000d6b31415c59a01b081) dm-6 EQLOGIC,100E-00
[size=200G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 12:0:0:0 sdb 8:16  [active][ready]
 \_ 13:0:0:0 sdc 8:32  [active][ready]

Multipath is now enabled. Next, set up persistent naming so that the system gets the same device mapping at every boot. The WWID shown in parentheses in the output above never changes, so we will use it for the persistent name.
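If you script this step, the WWID can be extracted from the `multipath -ll` header line; a minimal sketch using the sample line from above (the sed expression just grabs the value in parentheses):

```shell
#!/bin/sh
# Sketch: pull the WWID (the parenthesized value) out of a `multipath -ll`
# header line. The sample line below is the one shown above; in a real script
# you would pipe `multipath -ll` output in instead.
line='mpath6 (3560xty456000d6b31415c59a01b081) dm-6 EQLOGIC,100E-00'
wwid=$(printf '%s\n' "$line" | sed -n 's/^[^(]*(\([^)]*\)).*/\1/p')
printf '%s\n' "$wwid"
```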

Open multipath.conf file and go to the multipath section and make the following changes.

multipaths {
       multipath {
               wwid                    3560xty456000d6b31415c59a01b081    <-- WWID from the multipath -ll output
               alias                   backupdisk                         <-- friendly name for the device
               path_grouping_policy    multibus
               #path_checker           readsector0
               path_selector           "round-robin 0"
               failback                manual
               rr_weight               priorities
               no_path_retry           5
               rr_min_io               15
       }
}

#multipath -v2
#multipath -ll
backupdisk (3560xty456000d6b31415c59a01b081) dm-2 EQLOGIC,100E-00
[size=200G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 11:0:0:0 sdb 8:16  [active][ready]
 \_ 12:0:0:0 sdc 8:32  [active][ready]


The above output shows the new alias backupdisk.

#ls -l /dev/mapper

brw-rw---- 1 root disk 253,  2 Mar 22 11:55 backupdisk
crw------- 1 root root  10, 60 Mar 22 10:34 control
backupdisk is now a persistent name for the /dev/sdb and /dev/sdc paths.

-Create and mount filesystem on /dev/mapper/backupdisk

# mkfs.ext3 /dev/mapper/backupdisk
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
26230784 inodes, 52431360 blocks
2621568 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1601 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872
Writing inode tables: done
# mkdir /backup

Edit /etc/fstab and add an entry for /dev/mapper/backupdisk so that it mounts on /backup at boot.

/dev/VolGroup00/LogVol00 /                       ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/VolGroup00/LogVol01 swap                    swap    defaults        0 0
/dev/mapper/backupdisk   /backup                 ext3    defaults        0 0
#mount /dev/mapper/backupdisk
#df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      69893520   2446488  63839276   4% /
/dev/sda1               101086     12952     82915  14% /boot
tmpfs                 49477868         0  49477868   0% /dev/shm
/dev/mapper/backupdisk
                     206432944    191892 195754780   1% /backup

- Make sure to add these settings to the /etc/rc.d/rc.local file so they are preserved after a restart.

The file should look like this:


touch /var/lock/subsys/local
/sbin/modprobe bonding
/sbin/service network restart
multipath -v2
multipath -ll
chown oracle:oinstall /dev/mapper/data*
chown oracle:oinstall /dev/mapper/back*
mount /backup