In my case it was a permission issue with one of the disks. To check for permission issues, run the following command as the grid user; if grid lacks write permission on the device, the dd will fail:
$ dd if=/dev/zero of=/dev/rdsk/emcpower6e count=1440
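Before resorting to dd, you can inspect the device permissions directly. The sketch below uses a hypothetical helper (check_disk_perms is my name, not an Oracle tool), and the grid:asmadmin ownership with mode 0660 shown in the comments is only a typical 11.2 ASM convention; adjust for your site.

```shell
# check_disk_perms: show owner, group, and mode of a device, following
# symlinks (-L) so the underlying character device is inspected.
# Hypothetical helper, not part of any Oracle tool.
check_disk_perms() {
  ls -lL "$1" 2>/dev/null || echo "cannot access $1"
}

# On the cluster, as the grid user, you would check each candidate disk:
#   check_disk_perms /dev/rdsk/emcpower6b
#   check_disk_perms /dev/rdsk/emcpower6d
#   check_disk_perms /dev/rdsk/emcpower6e
# and, as root, fix a wrongly-owned disk (owner/group are site-specific;
# grid:asmadmin with mode 0660 is only a typical choice):
#   chown grid:asmadmin /dev/rdsk/emcpower6e
#   chmod 660 /dev/rdsk/emcpower6e
```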
sh /u01/app/11.2.0/grid/root.sh failed with the message ...see "/u01/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_node1.log" for details, and inside that logfile the failure occurred while creating ASM:
: Failed to stop ASM
: Initial cluster configuration failed.
: Initial cluster configuration failed.
Executing as grid: /u01/app/11.2.0/grid/bin/asmca -silent -diskGroupName DATA -diskList /dev/rdsk/emcpower6b,/dev/rdsk/emcpower6d,/dev/rdsk/emcpower6e -redundancy EXTERNAL -configureLocalASM

To fix this issue, first run the deconfigure step and then run root.sh again.
Here are the steps to fix this.
1. On all nodes except the last one
Log in as root and go to GRID_HOME (/u01/app/11.2.0/grid)/crs/install
#cd /u01/app/11.2.0/grid/crs/install
#./rootcrs.pl -deconfig -force
Parsing the host name
.........
........
Successfully deconfigured Oracle clusterware stack on this node
2. On the last node
#cd /u01/app/11.2.0/grid/crs/install
#./rootcrs.pl -deconfig -force -lastnode
Parsing the host name
....
...
Successfully deconfigured Oracle clusterware stack on this node
$ dd if=/dev/zero of=/dev/rdsk/emcpower6e count=1440
1440 records in
1440 records out
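The post zeroes only emcpower6e (the disk that had the permission problem), but since the DATA disk group creation failed partway through, it can be safer to clear the ASM header of every disk that was in the group before rerunning root.sh. This is a sketch under that assumption; wipe_asm_header is a hypothetical helper name, and it is destructive, so run it only against disks you intend to reuse for ASM.

```shell
# wipe_asm_header: zero the first 1440 512-byte blocks (~720 KB) of the
# given device, the same region the dd command above clears.
# Hypothetical helper; destructive, use with care.
wipe_asm_header() {
  dd if=/dev/zero of="$1" bs=512 count=1440 2>/dev/null
}

# Before rerunning root.sh you would clear every disk of the failed
# DATA group, not just the one with the permission problem:
#   wipe_asm_header /dev/rdsk/emcpower6b
#   wipe_asm_header /dev/rdsk/emcpower6d
#   wipe_asm_header /dev/rdsk/emcpower6e
```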
3. Now run root.sh again.
#sh /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
...
...
ASM created and started successfully.
DiskGroup DATA created successfully.
...
CRS-2672: Attempting to start 'ora.DATA.dg' on 'node1'
CRS-2676: Start of 'ora.DATA.dg' on 'node1' succeeded