ESXi machine migration

So the great day arrived: new hardware justification time.

I can admit I am a gear-aholic (it’s the first step to recovery), but so far I am still having trouble admitting my life is unmanageable as a result. That said, my accountant and better half tends to be very suspicious when I say we need a new …

I have to thank Windows Backup for the project charter. While trying to get the monthly copy of the system state — yes, you really should back up your domain controller once in a while — it failed. Re-run, fail, again and again, until MSDN and Google came to the agreement that I don’t have enough memory on the machine to complete the volume shadow copy that precedes the backup.

Unfortunately, the VMware ESXi host had already been hacked to reduce the host OS’s use of memory and CPU just to squeeze in the two Linux boxes and the Windows 2003 domain controller. It was my first venture into ESXi, and it was built on an old AMD box with a whopping 1 GB of RAM and a 1.4 GHz CPU. After running three production guests on a single machine for over a year, and getting my feet wet with ESXi along the way, Manjula-1 has fulfilled her mission and is ready to retire.

Basic hardware for a test lab is not that hard to find, but once you need to get into dual and quad core, the second-hand computer shops still want some money. I did manage to track down a dual-core Dell box with 2 GB of RAM: hardly a powerhouse, but twice as powerful, 64-bit instead of 32, and the price was right.

Plan 1: Take the SATA drive from Manjula-1 and put it into Manjula-2. The drive and dual-NIC transplant was easy enough, but the hypervisor would freeze at the start of the config script.

Plan 2: Build a brand new ESXi hypervisor from media and move the machines. The installation itself was far less dramatic than getting my quad-core server up and running with a 9550 RAID card a few months back. With supported hardware, ESXi 4 installs pretty smoothly and leaves most of the hard drive free for the guest systems.

The lab VMware environment actually uses an open-source SAN running iSCSI, which came in very handy when moving the virtual machine guests.

Since the free tools don’t include VMotion or any other replication/migration software, the simplest way seemed to be exporting the guests as virtual appliances and then re-importing them on the new server.

It’s pretty simple to export a virtual machine using the vSphere client; the only drawback is that the machine must be shut down while you do it. A small server is no problem: it fits on a USB drive, and you can export a 10–20 GB server in about 20 minutes or less.

The big server volumes could take days to export, so this is where having the data volumes on the SAN came in handy:

  • Shut down the guest server.
  • Edit the device settings.
  • Remove the SAN-backed hard disk (MAKE SURE YOU DON’T HAVE DELETE FILES CHECKED).
  • Select the server in the guest inventory and choose File / Virtual Appliance / Export.
  • Browse to a file storage location, like a USB hard drive, and go.

Disconnecting the SAN volume
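
If you’d rather script the export step than click through the vSphere client, VMware’s ovftool can pull the same OVF straight off the host from the command line. Here is a minimal sketch, assuming ovftool and Python are installed on the admin workstation; the host name, guest name, and USB path are made up for illustration:

    # Sketch: export a powered-off guest to OVF using VMware's ovftool.
    # The host name, VM name, and destination path are placeholders.
    import subprocess

    ESXI_HOST = "manjula-1.lab.local"   # old ESXi host (hypothetical name)
    VM_NAME = "dc01"                    # guest to export; shut it down first
    DEST = "/mnt/usb/exports/"          # USB drive mount point

    # ovftool reads the VM from the host inventory and writes an OVF plus VMDKs.
    subprocess.run(
        ["ovftool", f"vi://root@{ESXI_HOST}/{VM_NAME}", f"{DEST}{VM_NAME}.ovf"],
        check=True,
    )

It will prompt for the root password unless you embed it in the vi:// locator, which is better avoided on a shared machine.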

Importing the machine is pretty much the inverse; the primary difference is that you import from an OVF template. The good news is, you actually made one during the export process :-).

Importing a guest machine
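
If the clicking is getting old on the import side too, ovftool will deploy the exported OVF straight onto the new host. Another rough sketch, with placeholder host, datastore, and file names:

    # Sketch: deploy the exported OVF onto the new ESXi host with ovftool.
    # Host, datastore, and file names are placeholders.
    import subprocess

    NEW_HOST = "manjula-2.lab.local"        # new ESXi host (hypothetical name)
    OVF_FILE = "/mnt/usb/exports/dc01.ovf"  # the appliance created during export

    subprocess.run(
        ["ovftool", "--datastore=datastore1", OVF_FILE, f"vi://root@{NEW_HOST}/"],
        check=True,
    )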

Using a USB drive to store the guest appliances saved network bandwidth, and you get the added benefit of cold backups that could be used in a bare-metal server restore.

Once the guest server was restored, it was simply a matter of editing the settings to add the SAN volumes back in and powering it back on to validate that everything came over OK.

Reconnecting the SAN volumes was a little trickier, but as long as you watch for errors and modify accordingly it should be fine. It goes without saying that good documentation of where everything is located is very important: adding and reattaching the SAN volumes in the wrong order may corrupt data or an application.
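
One way to keep that documentation honest is to dump every guest’s disks and their backing VMDK paths before you pull anything apart. Here is a rough sketch using the pyVmomi Python bindings; the host name and credentials are placeholders, and certificate checking is switched off because this is a lab box with a self-signed cert:

    # Sketch: list each guest's virtual disks and their backing file paths,
    # so the "what plugs back in where" notes write themselves.
    # Host name and password are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()   # lab host, self-signed certificate
    si = SmartConnect(host="manjula-2.lab.local", user="root",
                      pwd="changeme", sslContext=context)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            for dev in vm.config.hardware.device:
                if isinstance(dev, vim.vm.device.VirtualDisk):
                    # backing.fileName looks like "[iscsi-lun1] guest/guest_1.vmdk"
                    print(vm.name, dev.deviceInfo.label, dev.backing.fileName)
    finally:
        Disconnect(si)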

Plan 2.5: Thank goodness for DR. Somehow the move from the server staging area to the permanent rack annoyed the VMware gods, or the cheap used computer gods, or maybe both of them. The VMware host machine was suddenly firing errors in the log, unable to see certain files, and so on. Both DNS servers were guests on the same host, so without a lot of other changes the VMware Communities support forum wasn’t reachable.

A reimage of the VMware host and re-restoring the guests took about 40 minutes. Joining the devices to the SAN took about two minutes, because the documentation and LUN pathing had been checked several times.

Because the VMware host was brand new, I didn’t have a host backup that tells VMware where the guest machine files are. The odds of failing between reboots are low, but it happens. That said, four critical system servers completely restored and running in 40 minutes from bare metal is pretty great. With the proper backups it would have been ten.

Here is a great link for a basic command-line backup to start with: http://vmwaretips.com/wp/2009/10/12/before-host-profiles-there-was-vicfg-cfgbackuppl/

The lazy sysadmin voice is saying “we can script that”, so watch for an automated backup script or two. There are a few options out there, so a little lab work to figure out the simplest one is in order. For now, copies on the sysadmin laptop and in the enterprise system backup folder will have to do.
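
As a starting point, here is the kind of thing I have in mind: a minimal sketch that wraps vicfg-cfgbackup.pl from the vSphere CLI (the tool in the link above) and drops a dated copy of the host config into the backup folder. The host name and backup path are placeholders:

    # Sketch: dated ESXi host-config backup using vicfg-cfgbackup.pl from the
    # vSphere CLI. Host name and backup folder are placeholders.
    import subprocess
    from datetime import date

    ESXI_HOST = "manjula-2.lab.local"   # hypothetical host name
    BACKUP_DIR = "/backups/esxi"        # enterprise system backup folder

    outfile = f"{BACKUP_DIR}/{ESXI_HOST}-config-{date.today():%Y%m%d}.bak"

    # -s saves the host configuration; restoring later is the same command with -l.
    subprocess.run(
        ["vicfg-cfgbackup.pl", "--server", ESXI_HOST,
         "--username", "root", "-s", outfile],
        check=True,
    )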
