Making all other blogs seem exciting!
  • FLARE 30 is out in the wild!

    Posted on September 3rd, 2010 by ashinn

    If you didn’t already know, after a few delays, FLARE 30 is now available for download on PowerLink. I’ve been playing with Unisphere a bit, and it seems like an improvement over ole Navisphere. However, it’ll take a while to shed those old Navi ways. At the very least it’s a solid base to start fresh on.

    If I have any disappointment… it would be how much DIDN’T change in Unisphere. In particular, I was disappointed to see very little was done to improve Analyzer. Let’s hope that’s something they build upon in later versions. Perhaps I was just expecting too much for a v1.0 release. Don’t get me wrong, I STRONGLY SUPPORT EMC revamping the midrange administration tools.

    One small thing: Unisphere finally shows you what failover mode a host is set to in connection manager. I realize it was no big deal to find out before, but it’s one less procedure to do. They also put the name of the mode along with the number (i.e. 4 = ALUA, and so on).

    Also, the way snapshots and clones are presented and organized is a pretty big step forward. I could go on and on; just download Unisphere and play around.

    I can’t tell you how it looks with the Celerra yet, since none of my Celerras are on DART 6. If you try to administer a DART 5.x box, it launches a browser window to the control station (how nice of it!).

    I’m currently trying to figure out whether a Celerra running DART 5.6.49 will work with FLARE 30 or not. I have an NS-960 basically not doing anything (who doesn’t?!), but I don’t want to brick the DART installation if I can avoid it! If/when I can get a straight answer out of EMC, I’ll update this post.

    I really want to put FAST Cache and all of the VAAI features through their paces before unleashing this day-1 code onto my poor customers & users. Hopefully I’ll get a thumbs up or down from support by this weekend, and I can start doing some IO testing and report back.

    Of note, I was told by an EMC TC that FLARE 30 requires a RecoverPoint upgrade, but I’ve yet to find a document to back that up. Just in case, I bounced this CX/NS-960 out of the available RP splitters for now.

  • EMC CLARiiON CX Disk Offset Configuration

    Posted on March 17th, 2010 by ashinn

    I’ll be updating this with the various OS methods of setting the disk offset. This is mostly for me to consolidate my notes. It should be noted this is valid for MOST current EMC disk technologies, but you should always consult the documentation to make sure.

    If anyone has an OS to add, or sees an error let me know.

    Microsoft Windows Server 2003: 

    1. Start -> Run -> cmd.exe
    2. Run diskpart.exe
    3. Run "list disk" and note the disk number of the new LUN you want to offset.
    4. Run "select disk #", where # is the disk number from step 3.
    5. Run "create partition primary align=X", where X = 32, 64 or 128 KB (in my case, 64).
    6. Format the disk in Disk Management, assign a letter, or use a mount point. (A full example session is shown below.)
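
    For reference, here’s roughly what the whole session looks like end to end. Treat it as a sketch: the disk number (2) and the drive letter are just examples for illustration, so substitute your own, and double-check the "list disk" output before selecting anything.

    C:\> diskpart
    DISKPART> list disk
    DISKPART> select disk 2
    DISKPART> create partition primary align=64
    DISKPART> assign letter=E
    DISKPART> exit

    From there you can format the new volume in Disk Management (or with format from the command line) as usual.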

    Microsoft Windows Server 2008:

    Technically this is no longer required, because Server 2008 automatically sets the offset to 1 MB on partition creation.
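
    If you want to sanity-check the offset Windows picked (on 2008, or on a 2003 volume you aligned by hand), one quick way I know of is querying WMI from a command prompt. A sketch:

    C:\> wmic partition get Name,StartingOffset

    A StartingOffset of 1048576 is the 1 MB default; 65536 would be a 64 KB offset.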

    Linux / older (2.x) ESX / etc:

    1. On the service console, execute "fdisk /dev/sdX" (or "fdisk /dev/emcpowerX" for CLARiiON systems), where X is the device on which you would like to create the new partition (a, b, c, etc.).
    2. Type “n” to create a new partition
    3. Type “p” to create a primary partition
    4. Type “1” to create partition #1
    5. Select the defaults to use the full disk.
    6. Type “t” to change partition type
    7. Type “1” to select partition #1
    8. Depending on your Linux environment and need: type “83” to set type to Linux partition, or type “82” to set type to Linux swap, or type “8e” to set type to Linux LVM, or type “fb” to set type to VMFS (vmware file system). For other partition types, type “L” to display the list of codes.
    9. Type “x” to get into the expert mode
    10. Type “b” to specify the starting block of partitions
    11. Type “1” to select partition #1
    12. Type "128" to align the partition on a 64KB boundary (sector 128, i.e. 128 x 512-byte sectors = 64KB)
    13. Type “w” to write new partition information to disk.
    14. Exit fdisk and format the partition with your favorite filesystem. (A condensed example session is shown below.)
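
    Strung together, a session might look roughly like this. This is only a sketch: /dev/sdb and partition type 83 are assumptions, the parenthetical notes are mine, and older fdisk builds word the prompts slightly differently (and skip the partition-number prompt when there’s only one partition).

    # fdisk /dev/sdb
    Command (m for help): n            (new partition)
    Command action: p                  (primary)
    Partition number (1-4): 1
    First cylinder: <Enter>            (accept the default)
    Last cylinder: <Enter>             (accept the default, use the full disk)
    Command (m for help): t            (change the partition type)
    Hex code (type L to list codes): 83
    Command (m for help): x            (expert mode)
    Expert command (m for help): b     (move beginning of data)
    Partition number (1-4): 1
    New beginning of data: 128         (sector 128 = 64KB offset)
    Expert command (m for help): w     (write the table and exit)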

    Solaris:

    To be added.

  • Making Exchange 2007 perform on ESX.

    Posted on March 11th, 2010 by ashinn

    For a couple of years we’ve maintained a full lab environment of our production hosted Microsoft Exchange 2007 CCR cluster. I have to be honest, and the Exchange administrator would agree, it’s never really performed that well. The Exchange admin finally got pretty ticked at the performance the other day, and while he was out on vacation I thought I’d see what I could do. It’s now performing about 100x faster than it ever did, and when he comes back Monday I hope he’s happy. More than anything, we’ve just not had the time to really dig into the issue(s).

    Needless to say, over these couple of years we’ve all learned quite a bit about how to eke more performance out of ESX, and in particular Exchange on ESX. I thought I’d share in one spot a bunch of the concepts and tidbits I used to arrive at better performance. The old tricks of throwing RAM and vCPUs at the problem just didn’t cut it.

    It’s worth noting that people these days might not choose to use CCR in a virtualized environment (or never did); however, I feel these concepts bleed over into stand-alone or maybe even FT/vLockstep implementations going forward.

    First and foremost, I invite you to read this article on Exchange 2007’s memory management strategy:

    http://msexchangeteam.com/archive/2008/08/06/449484.aspx

    Okay, now that you’ve read that, let’s continue. Suffice it to say Exchange literally grabs every piece of memory and page it can… if you let it (which most people do for cache/performance reasons).

    As most of you reading know, ESX has quite a few tricks up its sleeve in the memory management department itself, and I invite you to read about those concepts in the vSphere/ESX manuals. Obviously the VMs have access to physical RAM; ESX shares RAM pages when possible, begs/borrows/steals from other VMs (ballooning), and when necessary swaps to disk as a last resort. After careful examination of the performance logs of the Exchange VMs, it became very obvious ESX was swapping.

    Now, we happen to have an entire lab cluster where nobody really cares about performance … well, nobody except the persnickety Exchange admin anyway. After doing some research I came to the conclusion I didn’t want the Exchange servers to swap memory … period. I then set a memory reservation on the VM to the exact same size as the RAM I’d granted, in this case 3GB. This effectively disables the vswap file, since the host has no choice but to ante up the full amount. Doing just this provided an incredible performance boost to the Exchange cluster, but I scratched a little deeper.
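
    If you’d rather see where that reservation lands outside the VI client, it shows up as scheduler options in the VM’s .vmx file. A minimal sketch, assuming the 3GB (3072MB) grant above; treat the exact option names as something to verify against your own .vmx rather than gospel:

    memsize = "3072"            # RAM granted to the VM, in MB
    sched.mem.min = "3072"      # memory reservation; matching memsize means ESX can't swap this VM
    sched.mem.shares = "normal"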

    Within the VM itself I observed that it was paging quite a bit, and as the above-referenced article shows… it always will. So to extract maximum paging performance I decided to create a couple of LUNs and map them raw (RDM) to the Exchange servers. I then did some research and came to the conclusion that 4KB was the optimal block size for a raw paging volume; if anyone has differing opinions on that, PLEASE post them. I then created page files equal to granted memory + 20MB. After doing all of that and rebooting, I could tell we’re really cooking with gas now.
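
    For what it’s worth, once the RDM is partitioned (aligned per the CLARiiON offset post above), laying down the filesystem with that 4KB block size is a one-liner. A sketch only; the drive letter and label are made up:

    C:\> format P: /FS:NTFS /A:4096 /V:PAGEFILE /Q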

    Going forward, I would like to talk to the Exchange administrator about migrating his VMDK-based message stores to RDMs as well.

    So, in closing … hopefully some of these ideas will help you come up with resolutions to your own Exchange performance issues. I’m sure there are more tricks I need to find, but right now I’m pretty happy with the results.

    Till next time…

  • ESX, Linux LVM, file system expansion and you.

    Posted on December 9th, 2009 by ashinn

    I see quite a few methods being used out there for logical volume expansion, everything from what I’m about to post all the way down to folks using cloning tools like Ghost and Clonezilla. Here’s how I do it, and it works well for me. Note that I typically do these offline, for no other reason than I’ve biffed enough servers in my life to not be in a rush. If you need five-nines uptime, look for another blog. More than anything, it seems I usually have to expand /, and that also plays into doing it offline.

    It’s likely a good idea to visit the LVM man pages and/or these sites if your LVM knowledge is a bit weak: read this and maybe also this.

    1. Shutdown the VM.
    2. Extend the VMDK you’re working with. You have a couple of choices here. You can either extend it from the service console like so: vmkfstools -X <new total size> /path/to/the/vmdk (note that -X takes the new total size of the VMDK, e.g. 25G, not the amount you want to add), or you can extend it in the Virtual Infrastructure Client. If you use the service console, look at the options for vmkfstools for the exact syntax for your case.
    3. I think it goes without saying you should have backups and/or snapshots of the VM.
    4. Boot the LiveCD for your particular flavor of Linux. There are LVM LiveCDs out there too, and I bet a Knoppix disc might even work in a pinch. I happen to run SLES & openSUSE pretty much exclusively, and their LiveCDs work pretty well.
    5. Use fdisk and partition your new found space properly. Make sure you change the type to LVM (8e). If you’re not savvy with fdisk, you should likely stop now.
    6. So let’s say the new partition you created is /dev/sda5. You now need to create a new physical volume; use the command: pvcreate /dev/sda5 and with any luck it’ll complete.
    7. Now you need to extend the volume group to the physical volume you just added. Use the command: vgextend *volumenamehere* /dev/sda5. Use vgdisplay if you don’t know which volume group you need to extend off the top of your head.
    8. Finally, it’s time to extend the logical volume itself. Use this command: lvextend -L +5G /your/logical/volume/here. Use lvdisplay if you don’t know off the top of your head which LV you want to extend. In this example I added 5GB to the volume (note the leading + sign; without it, -L sets the absolute size rather than adding). If you wanted to consume all remaining space in the VG, you’d use something like this: lvextend -l +100%FREE /your/logical/volume/here. (A condensed example session follows this list.)
    9. At this point, if everything worked, you can expand the file system. On the systems I administrate we typically use XFS these days. XFS requires the filesystem to be online to grow. So while in the LiveCD you can quickly mount say /dev/system/root to /mnt and then execute xfs_growfs /mnt. Research the proper way to grow whatever file system YOU happen to use.
    10. Reboot the VM and make sure all is well.
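
    Strung together, the whole thing might look something like this. It’s only a sketch: the volume group name ("system"), the /dev/sda5 partition and the 5GB growth are the examples carried over from the steps above, so adjust them to your environment.

    # vmkfstools -X 25G /path/to/the/vmdk    (service console; 25G = the NEW total size, just an example)
    (boot the VM from the LiveCD and carve the new space into /dev/sda5 with fdisk, type 8e)
    # pvcreate /dev/sda5
    # vgextend system /dev/sda5
    # lvextend -L +5G /dev/system/root       (or: lvextend -l +100%FREE /dev/system/root)
    # mount /dev/system/root /mnt
    # xfs_growfs /mnt
    # umount /mnt
    (reboot the VM normally and sanity-check with df -h)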

    Till next time…

  • Dynamically rescan LUNs on SLES

    Posted on February 24th, 2009 by ashinn

    I’m sure this is child’s play to most people, but I’ve just not done much SAN work with Linux.

    We bought a new EMC CLARiiON CX4-240, and I was just tossing it random LUNs to do speed & HA tests. When I added a LUN I expected to see it in powermt, but I didn’t know what the equivalent of devfsadm was in Linux … SLES to be specific. What can I say, I worked with Solaris way too long.

    The first step is to run: powermt display

    # powermt display
    CLARiiON logical device count=3
    ==============================================================================
    ----- Host Bus Adapters ---------  ------ I/O Paths ------  ------ Stats -----
    ###  HW Path                       Summary   Total    Dead  IO/Sec Q-IOs Errors
    ==============================================================================
       3 qla2xxx                       optimal       6       0       -     0      0
       4 qla2xxx                       optimal       6       0       -     0      0

    Note the 3 and 4 preceding the HBAs.
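
    If you want to double-check that host3 and host4 actually exist before echoing into them, a quick look at sysfs lists the SCSI hosts the kernel knows about:

    # ls /sys/class/scsi_host/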

    Now, execute this:

    # echo "- - -" > /sys/class/scsi_host/host3/scan
    # echo "- - -" > /sys/class/scsi_host/host4/scan
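
    After the rescan, I believe you still need to tell PowerPath to claim the new paths before the LUN shows up properly in powermt. Something along these lines (hedging a bit, since behavior can differ by PowerPath version):

    # powermt config
    # powermt display dev=all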

    It’s my understanding this also works on RHEL and others, but YMMV. This is all buried in the PowerPath manual too, but hopefully I’ve saved someone a bit of time.

    Till next time…