Auto Deploy: ESXi Stateful Install with multiple LUNs connected fails

Recently I have been experiencing some trouble with VMware Host Profiles, Auto Deploy and the stateful install of ESXi 5.1 (Update 1). After the Host Profile is applied, the system eventually times out with the message: “The request failed because the remote server took too long to respond”, as shown in the screenshot below.

Host Profile Timeout

Read the full post »

Failover difference with SRM4 and SRM5 on HDS VSP

This blog post highlights the technical differences in how vCenter SRM 4 and SRM 5 perform failover with Hitachi’s Virtual Storage Platform (HDS VSP). Credits to Saravanan Mahalingam for providing this information.

SRM 4

  • SRM 4 does not support a failback operation
  • SRM 4 uses the horctakeover command for the failover operation. horctakeover fails over to the remote site and reverses the replication automatically when the volumes are in PAIR status and the remote array is online (to make sure the horctakeover command succeeds for TrueCopy Sync, the data fence level must be used).
  • When the horctakeover command fails to reverse the replication, the S-VOL is left in a special state called SSWS. In that case, pairresync -swaps must be executed manually to reverse the replication.

SRM 5

  • SRM 5 introduced the re-protect and failback operations
  • SRM 5 uses the pairsplit -RS command for the failover operation. Hence the S-VOL will always be in the SSWS state after the failover
  • The re-protect operation uses the pairresync command to reverse the replication and bring the volumes back into PAIR state (the S-VOL becomes the new P-VOL, and the new S-VOL is resynchronized from the new P-VOL)
  • The failback/personality-swap operation uses the pairresync -swaps command to reverse the replication

So while SRM 4 reverses the replication automatically, SRM 5 needs the “manual” re-protect operation for that. This is important to know in case you need to guarantee replication before VMs get booted on the Recovery Site (as part of the run book).
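To make the difference concrete, here is an illustrative sketch of the Hitachi CCI (RAID Manager) commands named above, run against a device group on the CCI host. Only the command names come from the post; the group name srm_group and the -g option are assumptions based on standard CCI usage.

# SRM 4 failover: take over at the recovery site and, when the volumes are
# in PAIR status and the remote array is online, reverse the replication
horctakeover -g srm_group

# If horctakeover could not reverse the replication (S-VOL left in SSWS),
# swap the copy direction manually
pairresync -swaps -g srm_group

# SRM 5 failover: split from the secondary side, always leaving the S-VOL in SSWS
pairsplit -RS -g srm_group

# SRM 5 re-protect: resynchronize and bring the volumes back into PAIR state
pairresync -g srm_group

# SRM 5 failback / personality swap: reverse the replication direction again
pairresync -swaps -g srm_group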

For more information on how to implement HDS VSP with vCenter Site Recovery Manager 5 see these resources:


vSphere Auto Deploy: Consider the Asset Tag to categorize ESXi Hosts

While designing for vSphere Auto Deploy you may want to group (categorize) your ESXi Hosts and attach a deploy rule (rule set) to them:

You specify the behavior of the Auto Deploy server by using a set of rules written in PowerCLI. The Auto Deploy rule engine checks the rule set for matching host patterns to decide which items (image profile, host profile, or vCenter Server location) to provision each host with.

So, for instance, by creating a deploy rule (rule set) that matches on hardware type you could add all identical hardware to a specific cluster (with a specific host profile). But what if you have different vSphere Clusters with identical hardware and don’t want to make an exception by manually adding a pattern like the hostname or MAC address every time? In that case you could consider using the hardware Asset Tag to categorize your ESXi Hosts.
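A minimal PowerCLI sketch of such a rule could look like the snippet below. The image profile, host profile, cluster and asset tag values are made up for illustration and are not part of the original post.

# Hypothetical names; adjust to your own environment
$image       = Get-EsxImageProfile -Name "ESXi-5.1.0-standard"
$hostProfile = Get-VMHostProfile -Name "HP-BL460-Gen8"
$cluster     = Get-Cluster -Name "Cluster-A"

# Match hosts whose hardware asset tag is set to "CLUSTER-A"
$rule = New-DeployRule -Name "Cluster-A-AssetTag" `
    -Item $image, $hostProfile, $cluster `
    -Pattern "asset=CLUSTER-A"

# Activate the rule by adding it to the active rule set
Add-DeployRule -DeployRule $rule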

Read the full post »

ESXi Hostname lookup with DHCP/DNS

Within this article I will share some of the challenges I faced while implementing a VMware Auto Deploy environment with DHCP for the Management Network.

The environment uses an Infoblox DHCP/DNS appliance, but the article contains some good-to-know points that are not Infoblox specific, so keep on reading.

DHCP is configured with MAC-address reservations to ensure that each ESXi Host receives a fixed IP address. After starting ESXi from PXE boot you will get the well-known ESXi boot screen:
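As a generic illustration of such a MAC-based reservation (the post’s environment uses an Infoblox appliance, so this sketch simply uses the Windows DHCP Server PowerShell module instead; scope, addresses and hostname are fictitious):

# Illustrative only; values are made up and the original environment is Infoblox-based
Add-DhcpServerv4Reservation -ScopeId 192.168.10.0 `
    -IPAddress 192.168.10.21 `
    -ClientId "00-50-56-aa-bb-01" `
    -Name "esxi01.lab.local" `
    -Description "Auto Deploy ESXi host"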

Read the full post »

Understanding HP Virtual Connect FlexFabric Mappings with VMware vSphere

Within this article I will try to give you a clear view of the HP Virtual Connect FlexFabric mappings that HP uses to provide their blades with NICs and HBAs. If you are looking for the Flex-10 mappings, click here.

In our example we use the HP BL460 Gen8 blade with a FlexibleLOM (LAN on Motherboard) Adapter and no additional mezzanine cards. HP’s quote about the FlexibleLOM:

 In past server generations, the adapter is embedded on the system motherboard as a LOM (LAN on motherboard). If a customer desired a different kind of networking adapter, they would need to purchase a standup adapter and install it in a slot in the server.

Now with many ProLiant Gen8 servers, our customers can now choose the type of network adapter they want. This new Flexible LOM provides our customer the ability to customize their server’s networking of today and the ability to change to meet future needs without overhauling server infrastructure.

Read the full post »

vSphere Authentication Proxy: Failed to bind CAM website with CTL

While troubleshooting the vSphere Authentication Proxy (vSphere 5.1) I ran into a bug which I will highlight in this article.

After successfully installing the Authentication Proxy I discovered an error message in the C:\ProgramData\VMware\vSphere Authentication Proxy\logs\camadapter.log:

2012-xx-xx 14:03:35: Failed to bind CAM website with CTL
2012-xx-xx 14:03:35: Failed to initialize CAMAdapter.

Read the full post »

Unable to enable the vSphere Authentication Proxy Plug-in

I recently had some issues getting the vSphere Authentication Proxy Plug-in enabled within vCenter Server 5.1. This is the error message that was displayed: “The server could not interpret the client’s request. (The remote server returned an error: (404) Not Found.)”


Read the full post »

View a .vpf file (VMware Profile Format)

This article describes a very easy method to view, in a readable fashion, a VMware Profile Format file (.vpf) as used by VMware Host Profiles.

Why would you want to read the .vpf file?
Certain troubleshooting scenarios become a lot easier if you are able to search for specific content within the Host Profile.

So how do we view this .vpf file?

  1. Export the Host Profile from the vCenter Server; this gives you the .vpf file;
  2. Rename the profile.vpf to profile.xml (so change the extension);
  3. Open the profile.xml with, for instance, Google Chrome or Internet Explorer.
Google Chrome gives you a very clear overview, as displayed below:
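If you prefer to script the first two steps, a minimal PowerCLI sketch could look like this (the profile name and file paths are made up for illustration):

# Hypothetical profile name and paths
$hp = Get-VMHostProfile -Name "MyHostProfile"
Export-VMHostProfile -Profile $hp -FilePath "C:\Temp\MyHostProfile.vpf"

# Step 2: change the extension so a browser will render it as XML
Copy-Item "C:\Temp\MyHostProfile.vpf" "C:\Temp\MyHostProfile.xml"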


Migrate to new vCenter Server while using dvSwitches

During vSphere 4.x to 5.x upgrades I occasionally run into the situation where dvSwitches are used on the current vCenter 4 and the customer wants to install a fresh copy of vCenter 5 (with a new dvSwitch).

Within this article I will describe a minimal-downtime procedure (one lost ping or less) to cover this, guided by pictures to give a clear understanding.

Step 1 – Starting point
First of all, we have the ESXi Host and VMs connected to the dvSwitch.
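Before touching anything it can help to record this starting point. A small PowerCLI sketch for that is shown below; the dvSwitch name is an assumption and the snippet is not part of the original procedure.

# Hypothetical dvSwitch name; requires the PowerCLI distributed switch cmdlets (5.1+)
$vds = Get-VDSwitch -Name "dvSwitch01"

# Hosts currently attached to the old dvSwitch
Get-VMHost -DistributedSwitch $vds | Select-Object Name

# VMs with a network adapter connected to one of its portgroups
$pgNames = $vds | Get-VDPortgroup | Select-Object -ExpandProperty Name
Get-VM | Get-NetworkAdapter |
    Where-Object { $pgNames -contains $_.NetworkName } |
    Select-Object Parent, Name, NetworkName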

Read the full post »