Deep Security 9: Error on call to ‘getaddrinfo’ (DSVA)

During a recent Deep Security implementation we experienced post-deployment issues with the Deep Security Virtual Appliances (DSVAs). After deployment of the DSVAs, the Deep Security Manager reports a "Communications Problem".
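Since getaddrinfo is the standard C library name-resolution call, an error on it usually hints at DNS trouble on the appliance. As a generic first check from the DSVA console (the hostnames and IP below are placeholders, not values from this environment):

```shell
# "Error on call to 'getaddrinfo'" generally means a hostname could not be
# resolved. Verify that the Deep Security Manager's FQDN resolves from the
# appliance (dsm.example.local is a placeholder for your DSM hostname):
nslookup dsm.example.local

# Also confirm the configured DNS server answers at all
# (192.168.1.10 is a placeholder for your DNS server):
nslookup dsm.example.local 192.168.1.10
```

If resolution fails here, fix DNS (or the appliance's resolver settings) before digging further into the manager-to-appliance communication.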


By the way, the deployment of the DSVA on ESXi 5.5 has a known issue where it does not work on the first attempt. Please read this for more information.

The error looks like this:


Read the full post »

Deploying DSVA on ESXi 5.5 fails

This article describes a scenario where vCenter times out while deploying the Trend Micro Deep Security Virtual Appliance 9.0 (DSVA) on an ESXi 5.5 Host. According to information I received from Trend Micro about this issue, it can simply be resolved by retrying the deployment.

So what happens? The first time you try to deploy the DSVA (and complete the wizard):


Read the full post »

“Prepare” ESXi 5.5 Host within Deep Security 9.0

During some recent Trend Micro Deep Security 9.0 implementations I've come across an issue while "Preparing" the ESXi 5.5 Host for Deep Security. This process is initiated from the Deep Security Manager and normally installs the Trend Micro Filter Driver and applies some settings to the ESXi 5.5 Host.

While preparing the ESXi 5.5 Host


You will eventually get the error message: “The installation transaction failed”.

This problem, along with a workaround, is described in this VMware KB article; however, the workaround is incomplete.
As VMware describes, you need to manually install the Filter Driver and add a virtual machine port group named "vmservice-trend-pg" to the configuration. This VM port group needs to be added to the same vSwitch that was created by vShield.
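A sketch of those manual steps from the ESXi shell, assuming SSH access to the host; the offline-bundle path is a placeholder, and the vShield-created vSwitch name (here assumed to be "vmservice-vswitch") should be verified first:

```shell
# Confirm the name of the vSwitch that vShield created before touching it:
esxcli network vswitch standard list

# Manually install the Trend Micro Filter Driver offline bundle
# (the datastore path below is a placeholder for your environment):
esxcli software vib install -d /vmfs/volumes/datastore1/FilterDriver-ESX_5.5.zip

# Add the missing port group to the vShield-created vSwitch
# (vSwitch name "vmservice-vswitch" is an assumption -- check the list above):
esxcli network vswitch standard portgroup add \
  --portgroup-name=vmservice-trend-pg \
  --vswitch-name=vmservice-vswitch
```

After this, re-running the "Prepare" action from the Deep Security Manager should pick up the now-present driver and port group.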

Read the full post »

BL460c Gen8: Not detecting SD-Card during ESXi 5.1 Installation

While installing new HP BL460c Gen8s (Xeon E5-2680 v2) we discovered some strange behavior after setting the HP Power Profile to "Maximum Performance". It appears that this setting causes the ESXi 5.1 installer to no longer detect the internal SD-Card as a storage device.

The default BIOS settings give us this, which is good:


Read the full post »

Auto Deploy: ESXi Stateful Install with multiple LUNs connected fails

Recently I have been experiencing some trouble with VMware Host Profiles, Auto Deploy and the stateful install of ESXi 5.1 (Update 1). After the Host Profile gets applied, the system eventually times out with the message: "The request failed because the remote server took too long to respond", as shown in the screenshot below.

Host Profile Timeout

Read the full post »

Failover difference with SRM4 and SRM5 on HDS VSP

This blog post highlights the technical differences in how vCenter SRM4 and SRM5 perform failover with Hitachi's Virtual Storage Platform (HDS VSP). Credits to Saravanan Mahalingam for delivering this information.


 SRM 4

  • SRM4 does not support the failback operation
  • SRM4 uses the horctakeover command for the failover operation. The horctakeover command fails over to the remote site and reverses the replication automatically when the volumes are in PAIR status and the remote array is online (to ensure that horctakeover succeeds for TrueCopy sync, the fence level "data" must be used)
  • When the horctakeover command fails to reverse the replication, the S-VOL is left in a special state called SSWS. To reverse the replication, pairresync -swaps must be executed manually

 SRM 5

  • SRM5 introduced the re-protect and failback operations
  • SRM5 uses the pairsplit -RS command for the failover operation. Hence the S-VOL will always be in the SSWS state after failover
  • The re-protect operation uses the pairresync command to reverse the replication and bring the volumes back into PAIR state (the S-VOL becomes the P-VOL, and the NEW_SVOL is resynchronized based on the NEW_PVOL)
  • The failback/personality-swap operation uses the pairresync -swaps command to reverse the replication

So while SRM4 reverses the replication automatically, SRM5 needs the "manual" re-protect operation for that. This is important to know in case you need to guarantee replication before VMs get booted on the Recovery Site (as part of the run book).
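The command differences above can be sketched from the CLI, assuming a Hitachi CCI (RAID Manager) copy group named VSP_SRM_GRP (a placeholder) and a running HORCM instance:

```shell
# Check the pair status; after an SRM5 failover the S-VOL side shows SSWS:
pairdisplay -g VSP_SRM_GRP -fcx

# SRM5's failover step issues (simplified):
#   pairsplit -g VSP_SRM_GRP -RS
# which is why the S-VOL is left in SSWS and replication is NOT yet reversed.

# Reversing the replication -- what SRM4's horctakeover did automatically,
# and what SRM5's re-protect (or a manual recovery from SSWS) runs:
pairresync -g VSP_SRM_GRP -swaps
```

The group name and HORCM configuration are assumptions for illustration; substitute your own copy group as defined in horcm.conf.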

For more information on how to implement HDS VSP with vCenter Site Recovery Manager 5 see these resources:


vSphere Auto Deploy: Consider the Asset Tag to categorize ESXi Hosts

While designing for vSphere Auto Deploy you may want to group (categorize) your ESXi Hosts and attach a deploy rule (rule set) to them:

You specify the behavior of the Auto Deploy server by using a set of rules written in PowerCLI. The Auto Deploy rule engine checks the rule set for matching host patterns to decide which items (image profile, host profile, or vCenter Server location) to provision each host with.

So for instance, by creating a deploy rule (rule set) that matches on hardware type, you could add all identical hardware to a specific cluster (with a specific host profile). But what if you have different vSphere Clusters with identical hardware and don't want to make an exception by manually adding a pattern like hostname or MAC address every time? In that case you could consider using the hardware Asset Tag to categorize your ESXi Hosts.
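A minimal PowerCLI sketch of such a rule; the asset tag value, cluster name, image profile and host profile names below are all placeholders for your environment:

```powershell
# Create a deploy rule that matches hosts on their hardware Asset Tag.
# All item and pattern values are placeholders -- substitute your own.
New-DeployRule -Name "Cluster-A-Hosts" `
  -Item "ESXi-5.1-ImageProfile", "Cluster-A-HostProfile", "Cluster-A" `
  -Pattern "asset=CLUSTER-A"

# Activate the rule by adding it to the working rule set:
Add-DeployRule -DeployRule "Cluster-A-Hosts"
```

Any host that boots with Asset Tag "CLUSTER-A" set in its BIOS then gets the right image, host profile and cluster, without per-host MAC or hostname patterns.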

Read the full post »

ESXi Hostname lookup with DHCP/DNS

Within this article I will share some of the challenges I faced while implementing a VMware Auto Deploy environment with DHCP for the Management Network.

The environment uses an Infoblox DHCP/DNS appliance, but this article contains some good-to-know things that are not Infoblox-specific, so keep on reading.

DHCP is configured to use MAC-address reservations to ensure that each ESXi Host receives a static IP address. After booting ESXi via PXE you will get the well-known ESXi boot screen:
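For the ESXi Host to pick up its hostname correctly, forward and reverse DNS should both match the DHCP reservation. A quick way to verify this from any machine on the network (hostname and IP are placeholders):

```shell
# Forward lookup: the host's FQDN should resolve to the reserved IP
# (esx01.example.local / 192.168.10.11 are placeholders):
nslookup esx01.example.local

# Reverse lookup: the reserved IP should resolve back to the same FQDN,
# otherwise the host may come up as "localhost" or with a bare IP:
nslookup 192.168.10.11
```

A mismatch between these two lookups is one of the most common reasons an Auto Deploy host ends up with the wrong name on the boot screen.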

Read the full post »

Understanding HP Virtual Connect FlexFabric Mappings with VMware vSphere

Within this article I will try to give you a clear view of the HP Virtual Connect FlexFabric mappings that HP uses to provide their blades with NICs and HBAs. If you are looking for the Flex-10 mappings, click here.

In our example we use the HP BL460c Gen8 blade with a FlexibleLOM (LAN on Motherboard) adapter and no additional mezzanine cards. HP's quote about the FlexibleLOM:

 In past server generations, the adapter is embedded on the system motherboard as a LOM (LAN on motherboard). If a customer desired a different kind of networking adapter, they would need to purchase a standup adapter and install it in a slot in the server.

Now with many ProLiant Gen8 servers, our customers can now choose the type of network adapter they want. This new Flexible LOM provides our customer the ability to customize their server’s networking of today and the ability to change to meet future needs without overhauling server infrastructure.
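To see how the FlexFabric FlexNICs and FlexHBAs end up on the ESXi side, these standard esxcli commands list the enumerated adapters; this is a generic check, not specific to this blade model:

```shell
# List the network adapters ESXi enumerated -- each FlexNIC carved out by
# Virtual Connect shows up as a separate vmnic:
esxcli network nic list

# List the storage adapters -- FlexHBAs (FCoE functions) show up as vmhbas:
esxcli storage core adapter list
```

Comparing this output against the Virtual Connect server profile is the quickest way to confirm which vmnic/vmhba maps to which FlexFabric port.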

Read the full post »