Wednesday, November 4, 2009
NFS is great technology when it comes to VMware: thin provisioning by default, no HBAs to mess with, dynamic storage allocation, and so on. However, there's one pitfall I've just noticed in our test lab.
With HBAs, when you ran out of space on a presented LUN, only VMs with snapshots attached would crash, because they were writing to a redo log and there was no space left for it.
With NFS and thin-provisioned VMs (and even thin-provisioned VMs on VMFS), every VM essentially behaves as if it has a redo log, so if you run out of space on a volume, every VM on that volume is going to crash - and that could be a large number of VMs.
Thankfully, dynamic allocation of NFS volumes should prevent this, given sufficient warning from your storage.
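If your storage team's alerting isn't there yet, even a cron'd free-space check on the datastore gives you that warning. This is only a minimal sketch - the datastore path, threshold and mail address are placeholders, and it assumes the service console's df sees your NFS datastores under /vmfs/volumes and can send mail:
#!/bin/bash
# Sketch: warn when a datastore climbs past a used-space threshold (placeholder values).
DATASTORE=/vmfs/volumes/nfs_datastore1
THRESHOLD=90
ADMIN=admin@domain.com
# df -P prints one line per filesystem; column 5 is Use%.
USED=`df -P $DATASTORE | tail -1 | awk '{gsub("%","",$5); print $5}'`
if [ "$USED" -ge "$THRESHOLD" ]; then
    echo "$DATASTORE is ${USED}% full" | mail -s "Datastore space warning" $ADMIN
fi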
Tuesday, September 8, 2009
VMworld 2009
Just got back from VMworld 2009 in San Francisco. The conference was great: good sessions, great labs (although I hear they had some trouble on Monday), and good people. Overall a great value; I'll be going again next year.
Max.
Wednesday, May 6, 2009
VMware storage migration script
We're moving from SAN-attached storage to NFS in our VMware environment (and vSphere 4 isn't out yet), so I wrote a script that migrates VMs from their current location to the new one. I had to hard-code the new location, since the slashes in the /vmfs/volumes path kept breaking the sed s/ expression - see the note after the script for a cleaner way around that.
Warning: if you have VMs with two disks that share the same file name (which can happen if they were created in different directories), this won't work. It handles up to three disks per VM and should be easy to extend to more.
#!/bin/bash
# Destination NFS datastore (hard-coded; see the note after the script).
HOMEDIR=/vmfs/volumes/d42ddbc9-93beeb73
# vmware-cmd -l lists registered .vmx paths; field 5 is the VM's directory name.
for VMNAME in `vmware-cmd -l | awk -F/ '{print $5}'`
do
    mkdir $HOMEDIR/$VMNAME/
    # Full path to this VM's .vmx minus the extension, and the directory it lives in.
    VMPATH=`vmware-cmd -l | grep /$VMNAME/ | awk -F. '{print $1}'`
    SRCDIR=`dirname $VMPATH`
    # Clone every scsi0 disk listed in the .vmx to the new datastore as thin.
    for VMDK in `grep scsi0:.*fileName $VMPATH.vmx | awk -F\" '{print $2}'`
    do
        vmkfstools -i $SRCDIR/$VMDK $HOMEDIR/$VMNAME/$VMDK -d thin -a lsilogic
    done
    vmware-cmd -s unregister $VMPATH.vmx
    sleep 2
    # Copy the .vmx and point the disk entries at the new datastore. These lines
    # assume the disks are named <vmname>.vmdk, <vmname>_1.vmdk and <vmname>_2.vmdk.
    cp $VMPATH.vmx $HOMEDIR/$VMNAME/$VMNAME.vmx.new
    sed "s/scsi0:0.fileName.*/scsi0:0.fileName = \"\/vmfs\/volumes\/d42ddbc9-93beeb73\/${VMNAME}\/${VMNAME}.vmdk\"/" $HOMEDIR/$VMNAME/$VMNAME.vmx.new > $HOMEDIR/$VMNAME/$VMNAME.vmx.new2
    sed "s/scsi0:1.fileName.*/scsi0:1.fileName = \"\/vmfs\/volumes\/d42ddbc9-93beeb73\/${VMNAME}\/${VMNAME}_1.vmdk\"/" $HOMEDIR/$VMNAME/$VMNAME.vmx.new2 > $HOMEDIR/$VMNAME/$VMNAME.vmx.new3
    sed "s/scsi0:2.fileName.*/scsi0:2.fileName = \"\/vmfs\/volumes\/d42ddbc9-93beeb73\/${VMNAME}\/${VMNAME}_2.vmdk\"/" $HOMEDIR/$VMNAME/$VMNAME.vmx.new3 > $HOMEDIR/$VMNAME/$VMNAME.vmx
    rm -f $HOMEDIR/$VMNAME/*.new*
    chmod +x $HOMEDIR/$VMNAME/$VMNAME.vmx
    vmware-cmd -s register $HOMEDIR/$VMNAME/$VMNAME.vmx
done
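As an aside, the only reason the destination is hard-coded in those sed lines is the s/ delimiter. sed will accept almost any character as the delimiter, so picking one that can't appear in the path (a pipe, say) lets the destination live in a variable. A rough sketch of what one of those lines could look like - NEWHOME is a hypothetical variable here, not something the script above defines:
NEWHOME=/vmfs/volumes/d42ddbc9-93beeb73
# With | as the sed delimiter, the slashes in the path don't need escaping.
sed "s|scsi0:0.fileName.*|scsi0:0.fileName = \"$NEWHOME/$VMNAME/$VMNAME.vmdk\"|" $HOMEDIR/$VMNAME/$VMNAME.vmx.new > $HOMEDIR/$VMNAME/$VMNAME.vmx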
Monday, February 9, 2009
HP Onboard Administrator LDAP authentication search context issue
This was an annoying problem today, so I'm posting it in case it helps anyone else.
I was trying to get the OA for our blades to authenticate against our AD. I set up LDAP the way the manual said, but no joy. Some research on the HP forums turned up that if the user you want to connect as is in a different OU than the group they're a member of, both OUs need to be configured as search contexts in the OA config. My config page looks like this:
Directory Server Address: dc.domain.com
Directory Server SSL Port: 636
Search Context 1: OU=AdminGroups,OU=Admin,DC=domain,DC=com
Search Context 2: OU=Admins,OU=Admin,DC=domain,DC=com
And then the group setup is like so:
CN=ILO-Admin,OU=AdminGroups,OU=Admin,DC=domain,DC=com
So the group above is covered by search context 1, but my admin account is in a different OU, which is why search context 2 is needed as well. Bah.
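If you want to sanity-check the DNs before touching the OA, an ldapsearch from any Linux box against the same DC does the trick. This is just a sketch - the bind account and user name below are placeholders for whatever exists in your domain:
# Look up the admin account and its group memberships over LDAPS (placeholder DNs).
ldapsearch -H ldaps://dc.domain.com:636 \
    -D "CN=Some Admin,OU=Admins,OU=Admin,DC=domain,DC=com" -W \
    -b "OU=Admin,DC=domain,DC=com" \
    "(sAMAccountName=someadmin)" dn memberOf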
Thursday, January 29, 2009
Script to locate incorrect path to disk on Netapp
One of the storage guys sent me an email indicating that some of the disk paths were incorrect on some of our newer servers - specifically, they were sending data to disk over partner paths. (We have two heads in a cluster on that filer, so unless it's in failover mode, it's less efficient to send data to disk via the partner path.)
So I wrote a quick script to check the paths on all hosts against the list of problem paths he sent me:
#!/bin/bash
# Check every host's multipath config against the list of bad paths.
# horhosts.txt holds one ESX hostname per line, errorpaths.txt one bad path per line.
OUTFILE=/root/lunerrors.txt
rm -f $OUTFILE
touch $OUTFILE
for host in `cat horhosts.txt`
do
    echo $host >> $OUTFILE
    for path in `cat errorpaths.txt`
    do
        # Append any multipath lines on this host that mention the bad path.
        ssh $host /usr/sbin/esxcfg-mpath -l | grep $path >> $OUTFILE
    done
done
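For reference, the two input files are just flat lists, one entry per line - something like this (the hostnames and path IDs are made up):
# horhosts.txt - ESX hosts to check
esx01.domain.com
esx02.domain.com
# errorpaths.txt - paths the storage team flagged
vmhba1:0:12
vmhba1:0:13
Run it from a box with passwordless SSH to the hosts and read /root/lunerrors.txt afterwards.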
Tuesday, January 20, 2009
Mac address conflicts
It turns out some VMs had already been assigned static MAC addresses, so my script duplicated some MAC addresses that were in use. Not a huge deal (the network team complained about some errors) until the following happened:
The SCCM guys assigned an image package by MAC address to those MAC addresses and didn't restrict it to XP images, so the SCCM box re-imaged some Windows 2003 boxes. Thank goodness we have vRanger in production, so the VMs were restored pretty quickly (save one, which errored out; Vizioncore gave us a beta version which restored it).
I modified the script to use a range outside the first 256 addresses. Make sure you don't conflict with addresses already in use if you use it, and make sure your imaging guys don't leave their packages wide open like ours did :).
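For what it's worth, the change was just to start numbering further into VMware's static range (00:50:56:00:00:00 through 00:50:56:3F:FF:FF). A minimal sketch of the idea - the index variable here is a stand-in for whatever counter your script uses:
# Sketch: build a static MAC for VM number i, skipping the first 256 addresses.
i=5                             # example VM index
NUM=$(( i + 256 ))
# Keep the first generated octet within 0x00-0x3F so the MAC stays in VMware's static range.
MAC=`printf "00:50:56:%02X:%02X:%02X" $(( NUM / 65536 % 64 )) $(( NUM / 256 % 256 )) $(( NUM % 256 ))`
echo $MAC                       # -> 00:50:56:00:01:05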