A word about maintenance: this NAS/SAN system EOL'ed in February of this year. The data I'll be hosting on it isn't very important - backups, temporary data, test VMs, etc. - but if a disk or other component fails (and it will), it would be more than nice to get the part replaced in a timely manner. Enter third-party maintenance. The Celerras were already under third-party maintenance, so I simply worked with the existing vendor to convert the asset from Celerra to CLARiiON. This also had the unexpected benefit of further reducing the cost of the maintenance contract.
The following sections document the process and procedures I used to re-purpose the CLARiiON back-end of two Celerras, giving a little more life to what would have otherwise been sent to the scrap heap.
Shutdown and Cabling
- Power off the Celerra NAS head (not the CLARiiON disk shelves or the Storage Processor enclosure)
- Disconnect the Ethernet and fibre cables from the SP enclosure.
Re-IP the Storage Processors
- Connect a laptop to the Ethernet port on SPA
- Change the laptop's IP to 128.221.252.111/255.255.255.0
- Browse to 128.221.252.200 (SPA)
- Log on using nasadmin/nasadmin
- Drill down to SPA, right-click, and choose Properties
- Under the Network tab, change IP and SP Network Name.
- Unplug the Ethernet cable from SPA, connect it to SPB, and repeat the steps above for SPB.
- Connect both SPs to the Ethernet switch. The remaining steps can now be performed remotely (a quick verification sketch follows below).
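If you have Navisphere Secure CLI (naviseccli) on a management workstation, you can confirm the new addresses took effect by querying each SP. This is just a sketch: the IP addresses are example values for the new management IPs, and it assumes the default nasadmin credentials are still in place at this point.

    # Query each SP at its new management IP (example addresses); getagent returns
    # basic agent/SP information if the SP is reachable and the login succeeds.
    naviseccli -h 10.0.0.50 -user nasadmin -password nasadmin -scope 0 getagent
    naviseccli -h 10.0.0.51 -user nasadmin -password nasadmin -scope 0 getagent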
User Management
I recommend setting up an administrative user to coexist with nasadmin. There is also an existing "admin" user that I would just leave there. I create an account named "Administrator" with a unique password.
- Go to the Tools menu and select Security\User Management\Add.
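The same account can likely be created from the CLI as well. Treat the following as a sketch only: the security -adduser flags vary by FLARE release, and the SP address and password are placeholders.

    # Add a global administrator account via the CLI (flag names/values may differ by FLARE release)
    naviseccli -h <SPA-IP> -user nasadmin -password nasadmin -scope 0 security -adduser -user Administrator -password <new-password> -scope global -role administrator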
Clean Up Existing LUNs
There will likely be many LUNs that were configured for and consumed by the Celerra NAS part of the system. I simply deleted all of these, even the DART/OS LUNs and RAID Groups. I did not have any hosts defined/registered.
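If the GUI gets tedious, the same clean-up can be done with naviseccli. A minimal sketch, assuming LUN 0 lives in RAID Group 0 - your LUN and RAID Group numbers will differ, and both commands are destructive:

    # Unbind (destroy) a LUN without prompting, then remove the now-empty RAID Group
    naviseccli -h <SPA-IP> unbind 0 -o
    naviseccli -h <SPA-IP> removerg 0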
Enable Access Logix
- Right-click on the CLARiiON (serial number) and choose Properties
- In the Storage Access tab, in the Data Access pane, check "Access Control Enabled"
Disk Layout/Hot Spares
Next I generate a disk layout report to see which disks are assigned as hot spares. Go to the Reporting node and generate a Configuration\Available Storage report. Check where the current hot spare disks are located; I typically like to have these at the end of the disk cabinet/bus enclosure. Move the spare(s) to the appropriate drive(s) as needed. The last EMC recommendation I'm aware of is one hot spare per 30 disks.
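The same information is available from the CLI, and a hot spare is just a one-disk RAID Group with a LUN bound as type hs. A sketch, where the disk address (bus_enclosure_slot), RAID Group ID, and LUN number are example values:

    # List every disk and its state; hot spares are identified in the output
    naviseccli -h <SPA-IP> getdisk
    # Turn the last disk in the first enclosure into a hot spare:
    # a one-disk RAID Group, then a LUN bound with type "hs"
    naviseccli -h <SPA-IP> createrg 200 0_0_14
    naviseccli -h <SPA-IP> bind hs 200 -rg 200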
SAN Fabric Configuration
Time to connect the CLARiiON to your fibre fabric (an example Brocade zoning session is sketched after this list).
- Each SP should have 2 ports. Connect one port to each fabric.
- Both SPs should be connected to both fabrics.
- Create an alias for each port/WWN in your SAN switch.
- Create the zones in your SAN switch to allow the hosts to "see" the CLARiiON.
- Save and enable the new configuration.
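The exact switch commands depend on your vendor. On Brocade FOS switches the alias/zone/config sequence looks roughly like the sketch below; the alias names, WWNs, and config name are all placeholders, and you would repeat it on each fabric for the corresponding SP port and host HBA.

    # Alias the CLARiiON SP port and the host HBA port (placeholder WWNs)
    alicreate "clariion_spa0", "50:06:01:60:xx:xx:xx:xx"
    alicreate "esx01_hba0", "21:00:00:xx:xx:xx:xx:xx"
    # Zone the host to the storage, add the zone to the config, then save and enable
    zonecreate "esx01_clariion", "esx01_hba0; clariion_spa0"
    cfgadd "prod_cfg", "esx01_clariion"
    cfgsave
    cfgenable "prod_cfg"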
Host and LUN Creation, Masking
Now that the CLARiiON can see the hosts, it's time to register the hosts with the CLARiiON:
- Right-click on the CLARiiON (the serial number) and choose Connectivity Status
- Note that you will need to know the WWNs of each FC port of your hosts. All of my hosts are ESXi servers, so the following instructions will be for these types of hosts.
- Highlight the Initiator Name and click Register
- The Initiator Type should be "CLARiiON Open"
- Enter the HBA information
- Enter the Host information
- When finished click OK
- Click the Refresh button and you should see the host name appear in the "Server Name" column
- Complete the remaining initiators.
- Create a RAID Group -
- Right-click on the RAID Groups node and choose 'Create RAID Group'
- RAID Group ID: leave the default value ("0" for the first one)
- Number of disks: this is the stuff that starts religious wars. To keep it simple I use groups of 5 with RAID 5.
- RAID Type: RAID 5
- Disk Selection: you could leave the default of "Automatic" and Navisphere will choose for you. However, I've never liked its choices, so set it to Manual and choose the disks that make the most sense (for example, the first 5 disks in enclosure x). A naviseccli sketch of the RAID Group and LUN creation steps follows this list.
- Create the LUN -
- Right-click on the RAID Group you just created and choose "Bind LUN"
- RAID Type - default should match the RAID Group setting
- RAID Group - should be the same ID as the one you selected
- Rebuild Priority - leave default
- Verify Priority - leave default
- Default Owner - Choose Auto
- LUN Size - Choose MAX from the drop-down list
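If you're carving up several shelves, the RAID Group and LUN steps script nicely with naviseccli. A sketch using example disk addresses (bus_enclosure_slot), RAID Group ID, and LUN number:

    # 5-disk RAID 5 group from the first five disks in enclosure 0
    naviseccli -h <SPA-IP> createrg 0 0_0_0 0_0_1 0_0_2 0_0_3 0_0_4
    # Bind LUN 0 as RAID 5 on that group; with no capacity specified it should
    # consume the whole group, which matches choosing MAX in the GUI
    naviseccli -h <SPA-IP> bind r5 0 -rg 0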
Connecting It All Together
- Create the Storage Group -
- Right-click on Storage Groups and choose "Create Storage Group"
- I used the name of my cluster since this storage will be shared among all hosts in that cluster (a naviseccli sketch of the Storage Group steps follows this list)
- Add hosts to the storage group -
- Right-click on the storage group name and choose "Connect Hosts"
- Select all of the hosts that should be a member of this group and move them to the right-hand pane
- Click OK
- Add LUNs to the Storage Group -
- Right-click on the storage group and choose "Select LUNs"
- Expand out the appropriate component and select the LUN(s)
- Click OK
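The Storage Group steps can also be scripted. A sketch with placeholder names - ESX_Cluster, esx01, and esx02 are examples; -alu is the array-side LUN number and -hlu is the LUN number the hosts will see:

    # Create the storage group, connect the ESXi hosts, then present LUN 0 as host LUN 0
    naviseccli -h <SPA-IP> storagegroup -create -gname ESX_Cluster
    naviseccli -h <SPA-IP> storagegroup -connecthost -host esx01 -gname ESX_Cluster -o
    naviseccli -h <SPA-IP> storagegroup -connecthost -host esx02 -gname ESX_Cluster -o
    naviseccli -h <SPA-IP> storagegroup -addhlu -gname ESX_Cluster -hlu 0 -alu 0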
Other Notes
- I recommend documenting the config as you're setting this up. This includes network names, IPs, and SP WWNs, which can be found at:
Storage Domains\LocalDomain\[SerialNo]\Physical\SPs\SPA(B)\Ports
(It's the second half of each WWN.)
- Don't forget to rescan your ESXi hosts and check that the paths are using Round Robin, VMW_SATP_CX, and IOPS=1 (see the esxcli sketch below).
- I would also generate a new Available Storage report after each RAID Group/LUN creation event. I save these to an Excel spreadsheet for future reference
- Use a tool such as Solarwinds Storage Manager to monitor SAN usage and availability.
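For the ESXi pathing note above, here's roughly what the esxcli side looks like (ESXi 5.x syntax; the naa device ID is a placeholder - pull the real IDs from esxcli storage nmp device list):

    # Rescan all HBAs so the new LUNs show up
    esxcli storage core adapter rescan --all
    # Make Round Robin the default PSP for CLARiiON (VMW_SATP_CX) devices going forward
    esxcli storage nmp satp set --satp VMW_SATP_CX --default-psp VMW_PSP_RR
    # For devices already claimed, switch them to Round Robin and set IOPS=1
    esxcli storage nmp device set --device naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx --psp VMW_PSP_RR
    esxcli storage nmp psp roundrobin deviceconfig set --device naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx --type iops --iops 1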