Failed to attach filter ‘pxdacf_filter’

During a recent customer visit we had a testing environment available in which some VMs couldn't be powered on or vMotioned to some of the ESXi Hosts. The error message:

An error was received from the ESX host while powering on VM xxxxx.
Failed to start the virtual machine.
Module DevicePowerOn power on failed.
Unable to create virtual SCSI device for scsi1:0, ‘/vmfs/volumes/39dfa56f-83350d20/xxxxxx/xxxxxx.vmdk’
Failed to attach filter ‘pxdacf_filter’ to scsi1:0: Not found (195887107).

The error message is similar to the one VMware describes in this KB article about vShield Endpoint: The virtual machine power-on operation fails with this error when a virtual machine that was earlier protected by vShield Endpoint is either moved or copied to a host that is not protected by the vShield Endpoint security solution.

However, this customer wasn't using vShield in this test environment and Google didn't return any hits for "pxdacf_filter". Troubleshooting eventually revealed that some of the ESXi Hosts had Proximal Data AutoCache installed, and VMs that are accelerated by Proximal contain the following line in their .vmx file:

scsix:x.filters = "pxdacf_filter"

This line obviously prevents the VM from powering on on an ESXi Host that doesn't have Proximal Data AutoCache installed.
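To find out which VMs still carry this entry before moving them around, a quick inventory helps. Below is a minimal PowerCLI sketch, assuming the filter shows up as a VM advanced setting (extraConfig), that Get-AdvancedSetting accepts a wildcard name, and that you are already connected to vCenter with Connect-VIServer:

# List every VM whose .vmx still references the Proximal Data filter
Get-VM | ForEach-Object {
    $vm = $_
    Get-AdvancedSetting -Entity $vm -Name 'scsi*.filters' -ErrorAction SilentlyContinue |
        Where-Object { $_.Value -match 'pxdacf_filter' } |
        ForEach-Object { "{0}: {1} = {2}" -f $vm.Name, $_.Name, $_.Value }
}

Verify the output first; Remove-AdvancedSetting can then be used to clean up the entries on VMs that no longer need acceleration.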

Deep Security 9: Error on call to ‘getaddrinfo’ (DSVA)

During a recent Deep Security implementation we experienced post-deployment issues with the Deep Security Virtual Appliances (DSVAs). After deploying the DSVAs, the Deep Security Manager reports a "Communications Problem".

Communications_Problem

By the way, deployment of the DSVA on ESXi 5.5 has a known issue where it fails on the first attempt. Please read this for more information.

The error looks like this:

Agent-Appliance-Error01

Read the full post »

Deploying DSVA on ESXi 5.5 fails

This article describes the scenario where you will experience a timeout from vCenter while deploying the Trend Micro Deep Security Virtual Appliance 9.0 (DSVA) on an ESXi 5.5 Host. According to information I received from Trend Micro, this issue can simply be resolved by retrying the deployment.

So what happens? The first time you try to deploy the DSVA (and complete the wizard):

Deploy_Appliance

Read the full post »

“Prepare” ESXi 5.5 Host within Deep Security 9.0

During some recent Trend Micro Deep Security 9.0 implementations I’ve come across an issue while “Preparing” the ESXi 5.5 Host for Deep Security. This process is initiated from the Deep Security Manager and should normally install the Trend Micro Filter Driver and add some settings to the ESXi 5.5 Host.

While preparing the ESXi 5.5 Host:

Prepare_ESX

You will eventually get the error message: “The installation transaction failed”.

This problem is described in this VMware KB article, along with a workaround; however, the workaround is not complete.
As VMware describes, you need to manually install the Filter Driver and add a Virtual Machine Portgroup named "vmservice-trend-pg" to the configuration. This VM Portgroup needs to be added to the same vSwitch that was created by vShield.
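If you have to add that portgroup on multiple hosts, a small PowerCLI loop keeps it consistent. This is only a sketch: the vSwitch name "vmservice-vswitch" is an assumption based on the default name vShield Endpoint uses, so adjust it to whatever vSwitch vShield actually created in your environment.

# Create the missing vmservice-trend-pg portgroup on each host that has the vShield service vSwitch
$portGroupName = 'vmservice-trend-pg'
$vSwitchName   = 'vmservice-vswitch'   # assumption: default vShield service vSwitch name

foreach ($vmHost in Get-VMHost) {
    $vSwitch  = Get-VirtualSwitch -VMHost $vmHost -Name $vSwitchName -ErrorAction SilentlyContinue
    $existing = Get-VirtualPortGroup -VMHost $vmHost -Name $portGroupName -ErrorAction SilentlyContinue
    if ($vSwitch -and -not $existing) {
        New-VirtualPortGroup -VirtualSwitch $vSwitch -Name $portGroupName | Out-Null
        Write-Host "Created $portGroupName on $($vmHost.Name)"
    }
}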

Read the full post »

BL460c Gen8: Not detecting SD-Card during ESXi 5.1 Installation

While installing new HP BL460c Gen8s (Xeon E5-2680 v2) we discovered some strange behavior after setting the HP Power Profile to "Maximum Performance". It appears that this setting causes the ESXi 5.1 installer to no longer see the internal SD-Card as a storage device.

Default BIOS Settings give us this, which is good:

SD-Card

Read the full post »

Auto Deploy: ESXi Stateful Install with multiple LUNs connected fails

Recently I have been experiencing some trouble with VMware Host Profiles, Auto Deploy and the stateful install of ESXi 5.1 (Update 1). After the Host Profile gets applied, the system eventually times out with the message "The request failed because the remote server took too long to respond", as shown in the screenshot below.

Host Profile Timeout

Read the full post »

Failover difference with SRM4 and SRM5 on HDS VSP

This blog post highlights the technical differences in how vCenter SRM4 and SRM5 perform failover with Hitachi's Virtual Storage Platform (HDS VSP). Credits to Saravanan Mahalingam for providing this information.

SRM 4

  • SRM4 does not support a failback operation
  • SRM4 uses the horctakeover command for the failover operation. The horctakeover command fails over to the remote site and reverses the replication automatically when the volumes are in PAIR status and the remote array is online (to make sure the horctakeover command succeeds for TrueCopy sync, the "data" fence level must be used).
  • When the horctakeover command fails to reverse the replication, the S-VOL is kept in a special state called SSWS. To reverse the replication, pairresync -swaps must be executed manually.

SRM 5

  • SRM5 introduced re-protect and failback operations
  • SRM5 uses the pairsplit -RS command for the failover operation. Hence the S-VOL will always be in the SSWS state after the failover
  • The re-protect operation uses the pairresync command to reverse the replication and bring the volumes back into PAIR state (the S-VOL becomes the P-VOL and the new S-VOL is resynchronized from the new P-VOL)
  • The failback/personality swap operation uses the pairresync -swaps command to reverse the replication

So while SRM4 reverses the replication automatically, SRM5 needs the "manual" Re-protect operation to do so. This is important to know in case you need to guarantee replication before VMs are booted on the Recovery Site (as part of the run book).

For more information on how to implement HDS VSP with vCenter Site Recovery Manager 5, see these resources:

vSphere Auto Deploy: Consider the Asset Tag to categorize ESXi Hosts

While designing for vSphere Auto Deploy you may want to group (categorize) your ESXi Hosts and attach a deploy rule (rule set) to them:

You specify the behavior of the Auto Deploy server by using a set of rules written in PowerCLI. The Auto Deploy rule engine checks the rule set for matching host patterns to decide which items (image profile, host profile, or vCenter Server location) to provision each host with.

So, for instance, by creating a deploy rule (rule set) that matches on hardware type, you could add all identical hardware to a specific cluster (with a specific host profile). But what if you have different vSphere Clusters with identical hardware and don't want to make an exception by manually adding a pattern like a hostname or MAC address every time? In that case you could consider using the hardware Asset Tag to categorize your ESXi Hosts.
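As a rough illustration, a deploy rule that matches on the Asset Tag could look like the PowerCLI sketch below. The asset tag value and the image profile, host profile and cluster names are placeholders, not values from a real environment.

# Send every host that reports asset tag "CLUSTER-A" to Cluster-A with a matching image and host profile
$imageProfile = Get-EsxImageProfile -Name 'ESXi-5.1.0-*-standard' | Select-Object -First 1
$hostProfile  = Get-VMHostProfile -Name 'HP-BL460c-ClusterA'   # placeholder host profile
$cluster      = Get-Cluster -Name 'Cluster-A'                  # placeholder cluster

$rule = New-DeployRule -Name 'ClusterA-by-AssetTag' `
    -Item $imageProfile, $hostProfile, $cluster `
    -Pattern 'asset=CLUSTER-A'

# Activate the rule by adding it to the active rule set
Add-DeployRule -DeployRule $rule

This way identical hardware can be steered to different clusters simply by setting a different Asset Tag per cluster, without maintaining per-host hostname or MAC address patterns.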

Read the full post »

ESXi Hostname lookup with DHCP/DNS

In this article I will share some of the challenges I faced while implementing a VMware Auto Deploy environment with DHCP for the Management Network.

The environment uses an Infoblox DHCP/DNS appliance, but this article contains some good-to-know things that are not Infoblox-specific, so keep on reading.

DHCP is configured to use MAC-address reservations to ensure that each ESXi Host receives a static IP address. After booting ESXi via PXE you will get the well-known ESXi boot screen:

Read the full post »