Add LUNs


Assumptions: you understand what SAN, LUN, LVM, and HBA mean, and you know your way around Linux.

First: you must have PowerPath 5.0.1 or newer (for RHEL5) and the Navisphere Agent (naviagent) installed. Confirm SAN connectivity:

powermt display dev=all
Pseudo name=emcpowera
CLARiiON ID=APM00074400171 [SG-cerebellum]
Logical device ID=6006016086901E00980CFA234A9CDC11 [LUN 41]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
###  HW Path                I/O Paths    Interf.   Mode    State  Q-IOs Errors
==============================================================================
   0 lpfc                      sda       SP A5     active  alive      0      0
   0 lpfc                      sdb       SP B5     active  alive      0      0
   1 lpfc                      sdc       SP A4     active  alive      0      0
   1 lpfc                      sdd       SP B4     active  alive      0      0


You can download the PowerPath and Navisphere Agent packages from our internal yum repositories.


Second: Add LUNs to your storage group using the Navisphere console.
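
If you prefer the command line, Navisphere Secure CLI can do the same thing. This is only a sketch: the SP address, HLU, and ALU numbers are placeholders for your own values, and I'm assuming naviseccli credentials are already set up (the storage group name comes from the powermt output above):

   naviseccli -h 10.0.0.10 storagegroup -addhlu -gname SG-cerebellum -hlu 1 -alu 42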


Third: rescan the SCSI bus so Linux recognizes the newly attached storage:

 echo "- - -" > /sys/class/scsi_host/host0/scan
 echo "- - -" > /sys/class/scsi_host/host1/scan


Fourth: restart the naviagent service and reload the HBA configuration. Note: this does not interrupt production I/O.

   service naviagent restart
   powermt config

You should now see new /dev/emcpower* devices.
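
A quick sanity check; both commands should list the new LUNs alongside the originals:

   powermt display dev=all
   ls -l /dev/emcpower*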


Fifth: if the new storage will be used in LVM, initialize each new pseudo-device as a physical volume. Name the devices explicitly rather than globbing /dev/emcpower*, which would also hit pseudo-devices that are already in use (emcpowerf below stands in for whatever device letter your new LUN received):

   pvcreate /dev/emcpowerf
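
From there, put the new PV into a volume group. The group names here are hypothetical; substitute your own:

   vgcreate VolGroup02 /dev/emcpowerf    # new volume group on the new LUN
   vgextend VolGroup01 /dev/emcpowerf    # or grow an existing group instead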


That's it! Be sure to follow my instructions when creating or resizing Logical Volumes and Volume Groups:

How to- Creating and Resizing LVM


Boot from SAN

First: your boot LUN MUST be the first LUN created in your storage group, and its Host ID must be 0. I am using a 73GB LUN for boot. Once the switch zone and LUN are created, configure the HBA BIOS to boot from your newly created LUN. I'm using two HP StorageWorks FC2142SR 4Gb PCI-e HBAs; hit CTRL-E during boot and you'll see the on-screen utility.

Second: install your OS as you would with internal disks. For Windows, be sure to use the HP SmartStart disk; it automatically installs the HBA drivers so you don't have to use the "F6" method. For Linux, put / in LVM; /boot has to stay on a standard partition, since GRUB can't read LVM.

Third: for Linux, once the OS is installed (I chose RHEL5) you must install PowerPath 5.0.1 (RHEL5) and the Navisphere Agent. Confirm the Navisphere console has out-of-band IP connectivity back to your host.

Once PowerPath and the Navisphere Agent are installed, start the services:

  /etc/init.d/PowerPath start
  /etc/init.d/naviagent start
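
You'll want both services to come back on reboot too. A sketch, assuming the init scripts above registered themselves with chkconfig (they should on RHEL5):

  chkconfig PowerPath on
  chkconfig naviagent on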


Fourth: You must now configure LVM to filter out everything except your boot devices during the boot process:

Edit your /etc/lvm/lvm.conf and replace your disk filter with the following. (Note: this is for RHEL5; if you're running another OS you must work out this line for yourself.)

  filter = [ "a/sda[1-2]$/", "r/emcpowera2/", "r/sd.*/", "r/disk.*/", "a/.*/" ]

Filter patterns are evaluated in order and the first match wins, so this allows only sda1 and sda2 to be scanned during the boot process.
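
For reference, here is my reading of each pattern; the intent of the emcpowera2 rule (rejecting the PowerPath alias of the root PV so LVM doesn't see it twice) is my interpretation:

  filter = [ "a/sda[1-2]$/",   # accept sda1 and sda2 (the boot disk)
             "r/emcpowera2/",  # reject the PowerPath alias of sda2's PV
             "r/sd.*/",        # reject every other sd* path
             "r/disk.*/",      # reject disk* paths
             "a/.*/" ]         # accept everything else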


Fifth: rebuild the LVM cache:

  vgscan -v

Sixth: make sure it works: sda1 and sda2 should be the only sd* devices listed.

[root@cerebellum /]# lvmdiskscan
 /dev/ram0                [       16.00 MB]
 /dev/emcpowera           [       73.00 GB]
 /dev/root                [       70.94 GB]
 /dev/ram                 [       16.00 MB]
 /dev/sda1                [      101.94 MB]
 /dev/emcpowera1          [      101.94 MB]
 /dev/VolGroup00/LogVol01 [        1.94 GB]
 /dev/ram2                [       16.00 MB]
 /dev/sda2                [       72.90 GB] LVM physical volume
 /dev/VolGroup01/LVData01 [      105.00 GB]
 /dev/ram3                [       16.00 MB]
 /dev/ram4                [       16.00 MB]
 /dev/ram5                [       16.00 MB]
 /dev/ram6                [       16.00 MB]
 /dev/ram7                [       16.00 MB]
 /dev/ram8                [       16.00 MB]
 /dev/ram9                [       16.00 MB]
 /dev/ram10               [       16.00 MB]
 /dev/ram11               [       16.00 MB]
 /dev/ram12               [       16.00 MB]
 /dev/ram13               [       16.00 MB]
 /dev/ram14               [       16.00 MB]
 /dev/ram15               [       16.00 MB]
 /dev/emcpowerb           [       50.00 GB] LVM physical volume
 /dev/emcpowerc           [       50.00 GB] LVM physical volume
 /dev/emcpowerd           [       50.00 GB] LVM physical volume
 /dev/emcpowere           [       50.00 GB] LVM physical volume
 3 disks
 19 partitions
 4 LVM physical volume whole disks
 1 LVM physical volume
[root@cerebellum /]#

Seventh: rebuild your initrd image to reflect the changes made to /etc/lvm/lvm.conf.

(Note: I'm running kernel 2.6.18-8, so adjust the version in the command. Write the image into /boot so GRUB can find it.)

 mkinitrd /boot/initrd-2.6.18-8.emc.img 2.6.18-8.el5
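
To confirm the new image really contains your edited lvm.conf, this check works on a stock RHEL5 initrd (a gzip-compressed cpio archive):

 zcat /boot/initrd-2.6.18-8.emc.img | cpio -it | grep lvm.conf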


Eighth: boot the new initrd image; modify your grub.conf.

  vi /boot/grub/grub.conf

Adjust grub.conf to boot your new image; a sample stanza follows.
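
This is only a sketch: the kernel path, root (hd0,0), and the root LV name (LogVol00) are assumptions, so copy your existing stanza and change just the initrd line:

  title Red Hat Enterprise Linux Server (2.6.18-8.el5, EMC initrd)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.18-8.emc.img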


Ninth: reboot and confirm the filter still holds: sda1 and sda2 should again be the only sd* devices listed.

The lvmdiskscan output should match the listing from step Sixth above.


Troubleshooting

I used the EMC PDF (300-004-438) for some of the LVM configurations.

Feel free to contact me: brian at phospher dot com