What is the difference between a hot migration and a cold migration?
The migration function is yet another advantage of using virtual machines in cloud computing systems. There are two types of migration: hot and cold.
Hot migration transfers the OS and applications from one physical machine to another without stopping the OS or applications. In a highly demanding environment such as a public cloud, even the best servers face a rising risk of failure after around three years. Hot migration avoids the downtime caused by failures and by maintenance of physical machines. Hot migration fulfills several needs:
• Frees up a given physical server for maintenance without downtime for users;
• Dynamically balances workloads among physical servers so that they all run at optimal levels;
• Prevents a facility’s under-performing servers (or those on the brink of failure) from interrupting operations.
Cold migration, meanwhile, suspends OS and applications on virtual machines before transferring them to physical machines. Types of migration available depend on the hypervisor selected. With cold migration, you have the option of moving the associated disks from one datastore to another. The virtual machines are not required to be on shared storage. The virtual machine you want to migrate must be powered off prior to beginning the cold migration process.
Enabling FT failed while the VM was powered off. The secondary VM was created, but powering on the VM produced the following error:
“A general system error occurred: The product version of the destination host does not support one or more CPU features in use by the virtual machine. Such features from CPUID level 0x80000001 register ‘edx’.”
This issue is caused by the Nx flag setting.
There are 2 options to correct this:
1. In VMware Infrastructure (VI) Client, select the virtual machine from the Inventory. Click Edit Settings > Options > Advanced > Hide the Nx flag from the guest. Note: The virtual machine must be powered off for the change to take effect.
2. Check /proc/cpuinfo on both hosts and verify that the flags line is identical. In this case, “nx” was missing on one host. Go into the BIOS of each host and either enable the no-execute memory protection setting on both hosts or disable it on both hosts.
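Option 2 can be scripted: capture the flags line from /proc/cpuinfo on each host and diff the two lists. A minimal sketch; the two here-documents stand in for captures taken from the real hosts.

```shell
# The sample files below stand in for the "flags" line captured from
# /proc/cpuinfo on each of the two hosts.
cat > host_a.txt <<'EOF'
flags : fpu vme pae nx lm
EOF
cat > host_b.txt <<'EOF'
flags : fpu vme pae lm
EOF
# Normalise to one flag per line, sorted, so diff compares flag sets.
tr ' ' '\n' < host_a.txt | sort > a.txt
tr ' ' '\n' < host_b.txt | sort > b.txt
diff a.txt b.txt || echo "flag mismatch - check the no-execute (nx) BIOS setting"
```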
I’m having an issue deploying a W2K8 R2 server from a customized template.
I start by building a W2K8 R2 VM from scratch, installing VMware Tools, and installing patches. Once the VM is shut down, I convert it to a template. vCenter is 4.1 and the ESX hosts are 4.1.
When deploying without customization I have no issues. When I use customization I get the following messages:
“autochk program not found – skipping AUTOCHECK”
….then BSOD
“STOP: C000021a Fatal System Error
The Session Manager Initialization system process terminated unexpectedly with a status of 0xc000003a”
and the process repeats.
Solution – Boot the VM off the Windows installation disc and choose Repair, then Command Prompt. You will probably have to enter the VM BIOS and change the boot order to boot from the CD.
I changed the IP of my vCenter Server and now all ESX hosts are showing as disconnected from VMware Infrastructure/vSphere Client. How can I resolve this?
The ESX hosts disconnect because each host stores the vCenter Server’s IP address in its configuration files, and the stale address continues to be used for heartbeat packets to vCenter Server.
There are two methods to get the ESX hosts connected again. Try each one in order, testing your results each time.
Method 1
1. Log in as root to the ESX host with an SSH client.
2. Using a text editor, edit the /etc/opt/vmware/vpxa/vpxa.cfg file and change the <serverIp> parameter to the new IP of the vCenter Server.
3. Save your changes and exit.
4. Restart the management agents on the ESX host.
5. Restart the VirtualCenter Server service.
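Steps 2 and 3 of Method 1 can be done non-interactively with sed. This is only a sketch: the sample vpxa.cfg below stands in for /etc/opt/vmware/vpxa/vpxa.cfg, and the IP addresses are placeholders.

```shell
# Sample file standing in for /etc/opt/vmware/vpxa/vpxa.cfg; only the
# <serverIp> element matters for this fix.
cat > vpxa.cfg <<'EOF'
<config>
  <vpxa>
    <serverIp>192.168.1.10</serverIp>
  </vpxa>
</config>
EOF
NEW_IP=192.168.1.20   # placeholder for the vCenter Server's new IP
# Rewrite the <serverIp> element in place.
sed -i "s#<serverIp>.*</serverIp>#<serverIp>${NEW_IP}</serverIp>#" vpxa.cfg
grep '<serverIp>' vpxa.cfg
```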
Method 2
1. From vSphere Client, right-click the ESX host and click Disconnect.
2. From vSphere Client, right-click the ESX host and click Reconnect. If the IP is still not correct, go to step 3.
3. From vSphere Client, right-click the ESX host and click Remove. Caution: After removing the host from vCenter Server, all the performance data for that host is lost, along with the virtual machines residing on the host.
4. Reinstall the VMware vCenter Server agent.
5. Select New > Add Host.
6. Enter the information used for connecting to the host.
If the IP traffic between the vCenter Server and ESX host is passing through a NAT device like a firewall or router and the vCenter Server’s IP is translated to an external or WAN IP, update the Managed IP address:
From a vSphere Client connected to the vCenter Server, click Administration in the top menu and choose VirtualCenter Management Server Configuration.
Click Runtime Settings from the left panel.
Change the vCenter Server Managed IP address.
If the DNS name of the vCenter Server has changed, update the vCenter Server Name field with the new DNS name.
Powering off a virtual machine fails with the error: Cannot power Off: Another task is already in progress
To resolve this issue:
1. Open the .vmx file of the virtual machine in a text editor.
2. Comment out the log.fileName line by prefixing it with a # so that it reads: #log.fileName = “/vmfs/volumes/4b8bd18f-c1f89b5a-1914-002219c8e7a3/vmware.log”
3. Restart the virtual machine for the change to take effect. Note: Instead of restarting the virtual machine, you can reload the .vmx file with these commands:
# vmware-vim-cmd vmsvc/getallvms (this command returns the VMID)
# vmware-vim-cmd vmsvc/reload <VMID>
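The edit in step 2 can also be made with sed instead of a text editor. A minimal sketch; the sample vm.vmx below (with a made-up datastore path) stands in for the virtual machine’s real .vmx file.

```shell
# Sample .vmx standing in for the real file; the datastore path is
# illustrative only.
cat > vm.vmx <<'EOF'
displayName = "testvm"
log.fileName = "/vmfs/volumes/datastore1/testvm/vmware.log"
EOF
# Prefix the log.fileName line with # to comment it out.
sed -i 's/^log\.fileName/#log.fileName/' vm.vmx
grep 'log.fileName' vm.vmx
```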
When attempting to mount a Windows or Samba share on the service console, the following error message is reported:
smbfs: mount_data version 1919251317 is not supported.
Resolution:
ESX 4.0 uses a different syntax for the mount command. Instead of using the smbfs keyword, use the cifs keyword.
The syntax for the command is similar to:
mount -t cifs //192.168.1.100/Share /mnt/share/ -o username=xxxxx
To ensure proper directory and file permissions are set, add the following two options:
dir_mode=0755,file_mode=0755
For example:
mount -t cifs //192.168.1.100/Share /mnt/share/ -o username=xxxxx,dir_mode=0755,file_mode=0755
Virtual machine fails with the error: Unrecoverable Memory Allocation Failure
Solution:
This issue occurs when memory is unavailable to the VMX process, causing the virtual machine to fail. The vmx process was updated to resolve this issue in ESX/ESXi 4.0 Update 2.
To resolve this issue, upgrade your installation to ESX/ESXi 4.0 Update 2 or later.
Stop Code 0x0000007B (inaccessible boot device) after upgrading a virtual machine’s virtual hardware to version 7
Solution:
This issue is caused by a failure to install the driver for the VMware Virtual disk SCSI Disk Device.
To resolve this issue:
1. Upgrade VMware Tools and reboot to confirm that the upgrade is successful.
2. Take a snapshot of the virtual machine as a backup.
3. Power off the virtual machine, upgrade the virtual hardware, and power on the virtual machine.
4. Log in to the virtual machine. The drivers in the Device Manager are not installed automatically. Note: Do not touch the driver install dialogs.
5. Open Explorer and go to C:\Windows\System32\drivers.
6. Change ownership from TrustedInstaller to Administrators for disk.sys and pci.sys, then immediately add Administrators with Full control permissions to the ACL for these files.
7. Keep the Explorer window open and use a new one to navigate to C:\Windows\inf.
8. Use a third Explorer window to navigate to C:\Windows\System32\DriverStore\FileRepository. There is a long list of folders there. You need the disk.inf_* and machine.inf_* folders. There may be more than one of each; if you do not know which to choose, take the one with the most recent modified date.
9. From the disk.inf_* folder, copy disk.inf to C:\Windows\inf.
10. From the disk.inf_* folder, copy disk.sys to C:\Windows\System32\drivers.
11. From the machine.inf_* folder, copy machine.inf to C:\Windows\inf.
12. From the machine.inf_* folder, copy pci.sys to C:\Windows\System32\drivers.
13. Locate and install the driver software. Confirm in Device Manager that the drivers have been installed successfully.
14. Update the driver for the VMware Virtual disk SCSI Disk Device. Browse the computer for driver software, point it to C:\Windows\System32\DriverStore\FileRepository, and include subfolders in the search.
15. Repeat steps 5-14 for the other devices that are listed as Unknown device in the Device Manager.
16. Reboot the virtual machine.
Deleting a snapshot fails with the error: A general system error occurred: concurrent access
This error occurs if two or more processes are attempting to access a virtual disk at the same time.
Multiple processes may attempt to access the virtual disk in these situations:
1. If virtual machine snapshots previously pointed at separate virtual disks are now incorrectly pointed to the same virtual disk. This situation can lead to corrupted data within the guest operating system file structure on the disks with which the snapshots are associated.
To resolve this issue, correct the virtual disk values.
2. If you are attempting to remove snapshots while VMware Data Recovery or VMware Consolidated Backup or a third-party backup solution is attempting to access the virtual disk at the same time.
Ensure that multiple processes are not accessing the virtual disk to resolve this issue.
Note: Restarting the management agents on the ESX host may also resolve this issue.
Unable to remove the network card associated with the iSCSI VMKernel
This issue may occur if there is active I/O on the VMkernel port. To resolve this issue:
1. Ensure that there are no active sessions using the VMKernel port.
2. Turn off the software iSCSI initiator.
3. Reboot the host.
Pressing Ctrl+Alt+Del in the ESX 4.0 service console, whether or not the ESX host is in maintenance mode, causes:
• All virtual machines running on the host to power down.
• The ESX host to reboot.
Solution:
If you cannot immediately patch your ESX host, disable the reboot on Ctrl+Alt+Del on the ESX host:
1. Log in to the ESX host via KVM, SSH or by accessing the console directly.
2. Open the file /etc/inittab using a text editor such as vi or nano.
3. Edit /etc/inittab by placing a # symbol in front of the line ca::ctrlaltdel:/sbin/shutdown -t3 -r now so that it reads:
# Trap CTRL-ALT-DELETE
#ca::ctrlaltdel:/sbin/shutdown -t3 -r now
4. Save the file and exit the text editor.
5. Force the configuration changes to take effect without rebooting the host by executing: init q
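Step 3 can be done non-interactively with sed. A minimal sketch; the sample inittab below stands in for the real /etc/inittab.

```shell
# Sample file standing in for /etc/inittab.
cat > inittab <<'EOF'
id:3:initdefault:
# Trap CTRL-ALT-DELETE
ca::ctrlaltdel:/sbin/shutdown -t3 -r now
EOF
# Comment out the ctrlaltdel trap so Ctrl+Alt+Del no longer reboots
# the host.
sed -i 's|^ca::ctrlaltdel|#ca::ctrlaltdel|' inittab
grep 'ctrlaltdel' inittab
```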