Understanding HP Flex-10 Mappings with VMware ESX/vSphere

I’ve written this post as an add-on to Frank Denneman’s blog about Flex-10, which you can find over here.
The goal of this post is to give a clear picture of the Flex-10 port mappings that HP uses to provide its blades with NICs, with a special focus on VMware ESX/vSphere.

If you are looking for the HP FlexFabric mappings click here.

First we start off by looking at the “NIC to Interconnect” mappings. These are pretty straightforward and should be known to all HP c-Class administrators.
In our example we use HP BL460 G6 blades with 4 Flex-10 NICs (two onboard and two provided via a dual-port Mezzanine card).

Please note that the connections that are drawn below are hardwired connections on the Backplane of the HP c7000 Enclosure.

HP Blade Connections toward Interconnect Modules

(The reason we use Mezzanine Slot 2 instead of Slot 1 is that other servers in the enclosure already have a connection via Mezzanine Slot 1.)

So, our VMware vSphere host is physically equipped with 4 10Gb NICs, so you would expect to see 4 vmnics in ESX, right?… Wrong!
The HP Virtual Connect Domain virtualizes each 10Gb NIC and carves 4 FlexNics out of it. After doing some math 😉 we can conclude that we will get 16 vmnics in our ESX host.

The image below shows that we get 4 FlexNics per port and how these FlexNics correspond to a vmnic within ESX.

FlexNic to vmnic mapping

So in the image above we see that, for example, Port 1 of the onboard adapter is divided into 4 FlexNics: 1A, 1B, 1C and 1D.
PCI numbering (and thus the order in which the vmnics are numbered within ESX) is based on 1A (onboard), 2A (onboard), 1B (onboard), 2B (onboard), 1C (onboard), 2C (onboard), etc.

Notice that the first 8 vmnics come from the onboard adapter and the second 8 vmnics come from the Mezzanine card.
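The enumeration order described above can be sketched as a small script. This is purely illustrative (the adapter and port names are mine, not an HP or VMware API), but it reproduces the vmnic numbering you end up with in ESX:

```python
# Sketch of the FlexNic -> vmnic enumeration order described above.
# Adapter/port names are illustrative, not an HP or VMware API.
adapters = ["LOM", "MEZZ2"]           # onboard adapter first, then the mezzanine card
ports = [1, 2]                        # two 10Gb ports per adapter
functions = ["A", "B", "C", "D"]      # four FlexNics carved from each 10Gb port

vmnics = {}
vmnic_index = 0
for adapter in adapters:              # all onboard FlexNics enumerate first
    for function in functions:        # ...then by function (A, B, C, D)
        for port in ports:            # ...alternating between the two ports
            vmnics[f"vmnic{vmnic_index}"] = f"{adapter} port {port} FlexNic {port}{function}"
            vmnic_index += 1

print(vmnics["vmnic0"])   # LOM port 1 FlexNic 1A
print(vmnics["vmnic6"])   # LOM port 1 FlexNic 1D
print(vmnics["vmnic8"])   # MEZZ2 port 1 FlexNic 1A
```

Running this gives 16 entries: vmnic0-7 on the onboard adapter and vmnic8-15 on the mezzanine card, matching the diagram.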

From within HP Virtual Connect Manager we can divide the available 10Gb of bandwidth over those 4 FlexNics. For example, we can give 1A (vmnic0) 1Gb, 1B (vmnic2) 7Gb and 1C (vmnic4) 1Gb, which leaves us with 1Gb to hand out to 1D (vmnic6).
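The invariant here is that the four FlexNic allocations carved from one port can never exceed the 10Gb the physical port provides. A tiny helper (my own, not a Virtual Connect tool) makes that explicit:

```python
# Illustrative helper (not part of Virtual Connect): check that the FlexNic
# allocations carved from one 10Gb port don't exceed the port speed.
def remaining_bandwidth(allocations_gb, port_speed_gb=10):
    """Return the bandwidth left on the port after the given allocations."""
    used = sum(allocations_gb.values())
    if used > port_speed_gb:
        raise ValueError(f"over-allocated: {used}Gb on a {port_speed_gb}Gb port")
    return port_speed_gb - used

# The example from the text: 1A=1Gb, 1B=7Gb, 1C=1Gb leaves 1Gb for 1D.
print(remaining_bandwidth({"1A": 1, "1B": 7, "1C": 1}))  # 1
```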

Bandwidth Allocation

Since vSphere has much better iSCSI performance than ESX 3.5 did, we decided to use the full 10Gb of bandwidth to connect the LeftHand iSCSI storage. Technically this means that we give 1 FlexNic 10Gb, which leaves us with 0Gb to share among the 3 remaining FlexNics (per port).

The image below shows how the technical design looks now:

FlexNic to vmnic mapping with 10Gb iSCSI

From a Virtual Connect Manager perspective we used the following settings in the attached Server Profile (see image below).

Virtual Connect Server Profile

Please note that we defined all 16 NICs and left 6 of them “Unassigned”.

The “Unassigned” ones are the FlexNics from Mezzanine Slot 2 which didn’t get any bandwidth assigned to them, as you can see in the “Allocated Bandwidth” column.
So for iSCSI we selected MZ2:1-A and MZ2:2-A as the 2 links with 10Gb allocated, leaving 0Gb for MZ2:1-B, MZ2:2-B, etc.

The final picture from vSwitch perspective looks like this, where we separated:

-Service Console (1Gb – vSwitch0)
-VMotion (7Gb – vSwitch1)
-Fault Tolerance (1Gb – vSwitch2)
-VM Networks (1Gb – vSwitch3)

And gave the full 10Gb to the iSCSI storage (vSwitch4).

vSwitch Perspective
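The layout above can be captured as plain data. The vmnic pairings below are my reading of the diagrams (inferred from the mapping earlier in this post, not taken from a config dump), but they show the key property of this design at a glance: every front-end vSwitch uses only onboard vmnics, and the iSCSI vSwitch only mezzanine ones.

```python
# My reading of the vSwitch design (vmnic pairings inferred, not authoritative).
vswitches = {
    "vSwitch0": {"role": "Service Console", "gb": 1,  "uplinks": ["vmnic0", "vmnic1"]},
    "vSwitch1": {"role": "VMotion",         "gb": 7,  "uplinks": ["vmnic2", "vmnic3"]},
    "vSwitch2": {"role": "Fault Tolerance", "gb": 1,  "uplinks": ["vmnic4", "vmnic5"]},
    "vSwitch3": {"role": "VM Networks",     "gb": 1,  "uplinks": ["vmnic6", "vmnic7"]},
    "vSwitch4": {"role": "iSCSI Storage",   "gb": 10, "uplinks": ["vmnic8", "vmnic9"]},
}

def uses_only(vswitch, vmnic_range):
    """True if every uplink of this vSwitch falls in the given vmnic range."""
    return all(int(u[len("vmnic"):]) in vmnic_range for u in vswitch["uplinks"])

# vmnic0-7 live on the onboard adapter, vmnic8-15 on the mezzanine card,
# so each vSwitch here depends on exactly one physical card.
for name, vs in vswitches.items():
    card = "onboard" if uses_only(vs, range(0, 8)) else "mezzanine"
    print(name, vs["role"], card)
```

Because both uplinks of each vSwitch come from the same physical card, each card is a single point of failure for its traffic type, which is exactly the trade-off discussed next.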

Please note that the above design contains two single points of failure: whenever the onboard NIC fails, my whole front-end fails (and likewise, whenever the Mezzanine card fails, my whole storage connection is lost).
Customer constraints, however, kept me from doing it the way displayed in the image below (which obviously is technically the best way). The design in that image also covers hardware failure of either the onboard adapter or the Mezzanine card.

Without the Single Points of Failure

So now that I’ve explained the mappings from Virtual Connect (FlexNics) towards ESX (vmnics), let’s take a look at the rest of the Virtual Connect Domain configuration.

There are two Shared Uplink Sets (SUS) created:

- FRONTEND, which carries the COS, VMotion, VM Networks and Fault Tolerance traffic;
- STORAGE, which carries the physically separated iSCSI storage LAN.

The “FRONTEND” SUS is connected via 4 10Gb connections towards two Cisco 6509s (20Gb active / 20Gb passive).
The “STORAGE” SUS is connected via 4 10Gb connections towards two Cisco Nexus 5000s (20Gb active / 20Gb passive).

Virtual Connect Shared Uplink Sets

Word of advice: it’s recommended to enable PortFast on the switch ports that terminate the Shared Uplink Set connections. While doing failover tests we noticed that our networking department hadn’t turned on PortFast as we had requested, which resulted in spanning tree kicking in whenever we powered on a Virtual Connect module.
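On the Cisco side, that PortFast request would look roughly like this (a sketch only; the interface name and description are made up, and the exact commands depend on the platform and IOS version):

```
! Assumed Cisco IOS sketch - interface name and description are illustrative.
interface TenGigabitEthernet1/1
 description Uplink to Virtual Connect Shared Uplink Set FRONTEND
 switchport mode trunk
 spanning-tree portfast trunk
```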

Word of advice: the next issue we ran into were some CRC errors in the Virtual Connect statistics (while the Ciscos didn’t register any CRC errors). These errors disappeared when we defined the Shared Uplink Sets with a static speed of 10Gb instead of “auto”.

Last word of advice: when implementing a technical environment like this it’s crucial to test every possible failure, from a single ESX host down to all the separate components. I’ve written very detailed documents about this, and it helped us discover a very strange technical problem which I’m currently investigating.

Update 04-12-2009: since this post is one of my top articles I decided to write more about the extensive testing; read all about it here.

For those who are interested, I’ll explain the strange technical problem to close off this blog:

Whenever a Virtual Connect module fails, the downlinks towards ESX fail as well (since these are hardwired via the c7000 backplane). So, for example, whenever the module in Interconnect Bay 1 fails, vmnic0, 2, 4 and 6 fail as well, causing a failover to be initiated from within ESX (when configured correctly, obviously 😉).

When the module is powered on again the vSwitch uplinks are restored. This is the case for Interconnect Bays 1 and 2, but not for Interconnect Bays 5 and 6. Whenever we simulate a device failure on Interconnect Bay 5 or 6 we obviously lose the corresponding vmnic connections, but they won’t come back online when we power the Interconnect module on again. Currently the only way to get the connections back is to reboot the whole ESX host. I’m currently working on firmware updates, as it seems this will resolve my issue. I’ll keep you guys posted.

Problem update 02-10-2009
For the last few weeks I’ve been swapping e-mails with VMware Support, since updating the HP firmware (Virtual Connect and all the other components) didn’t solve the problem.

First of all, VMware gave me an alternate Broadcom bnx2x driver which unfortunately didn’t solve the problem. The next step was to unload the bnx2x module and start it again in debugging mode, so they could get some more information than just the vmkernel log being flooded with “cpu7:4215)<3>bnx2x: vmnic10 NIC Link is Down” entries.

So I enabled debugging mode on the bnx2x module, rebooted the Interconnect module again, and remarkably my vmnic connection got restored this time! I did the test again without debugging mode enabled, and the connection never gets restored. Very odd!

I passed the new details on to VMware and am currently awaiting their response. To be continued…




  1. Great job in simplifying the Flex10. I’ve got you linked on my site (BladesMadeSimple.com). I’m curious about a couple of things:

    a) how is the performance of 10Gb Ethernet – especially as carved up by Flex10?
    b) what value is it to connect to the Cisco Nexus 5000 if the Flex10 NICs only run at 10Gb Ethernet (lossy) speeds?

  2. Maybe I don’t understand the concept, but what happens if Mezzanine Slot 2 dies or the onboard module dies? Wouldn’t you have an issue because there’s no redundancy?

  3. SpaceDeep

     /  November 5, 2009

    I agree with you, Duncan. This configuration is not really good. If the mezzanine or onboard slot dies you will lose connectivity. I cannot see any redundancy in this concept. My opinion is that you should create the same configuration on both slots (mezzanine and onboard). And there is another mystery for me: why should we use a 7Gb connection for VMotion?
    My idea for this design is:
    Port 1-1 SC net 1 Gb
    Port1-2 VMotion 1 Gb
    Port1-3 FT 2 Gb
    Port1-4 Other Networks 6 Gb
    Port 2-1 Storage 10 Gb
    Mezz 2:
    Port 1-1 SC net 1 Gb
    Port1-2 VMotion 1 Gb
    Port1-3 FT 2 Gb
    Port1-4 Other Networks 6 Gb
    Port 2-1 Storage 10 Gb
    What do you think about this conf Duncan?

  4. Kenneth van Ditmarsch

     /  November 5, 2009

    Hi Duncan,

    Completely correct, they both are a SPOF, which was explicitly chosen by the customer.
    The reason for that was that the first design drawings were based on the HP BL460 G1, which only had 10Gb on the mezzanine card (thus leaving us with no other option than to team on 1 dual-port NIC).

    Since this drawing was presented to all the technical people (and it took them a long time to understand the blade technologies), customer constraints demanded that we team the same way, purely for the sake of technical understanding.
    In this case it’s rather simple: Interconnect Bay 1 is redundant to Interconnect Bay 2 (horizontal), IB3 is redundant to IB4, etc.

    So yes, I would have designed it otherwise, but constraints kept me from doing so. Good point though, since I didn’t mention this in the blog.
    (Actually my only target was to define the port mappings, but in my enthusiasm the article kept on growing 😉


  5. Kenneth van Ditmarsch

     /  November 5, 2009

    Correct, but customer constraints kept me from doing so.
    I will note this in the blog 🙂

    BTW, concerning VMotion: we are running VMs with 16 GB of memory which we wanted to transfer as fast as possible.
    Network statistics showed that the VMs aren’t running high on networking. Whenever in the future we need to shuffle the speeds, this can be done dynamically (which was one of the customer constraints as well).

  6. Kenneth van Ditmarsch

     /  November 5, 2009

    Hi Kevin,

    Thanks and thanks for your comment 🙂

    Your first question, about the speed: we ran some 100% sequential reads with
    4 KB block sizes to max out the performance towards the LeftHand nodes, reaching 220 MB/s. Our current bottleneck is the storage nodes, which are unfortunately running on 2x 1Gb connections. So in the current setup we cannot max out the complete bandwidth.

    Concerning the second question, the Cisco Nexus 5000 was one of the things that was already available for us to connect to.

  7. Spacedeep,

    The article written by Kenneth is not a blueprint for every Flex-10 environment.
    He just describes this particular design. You are free to use the standard 1Gb line speed for VMotion.

    Good article Kenneth!
    It’s nice to see an article that shows how office politics and conflicting agendas between server and network tribes can often lead to a suboptimal design.

    Being a good designer sometimes means you must swallow your pride and design an environment that aligns with the needs and requirements of the customer, instead of an optimal technical design. Explain the issues, document your objections and deliver the design if your objections are waived by the customer.

  8. And don’t get me wrong, it’s an excellent article!! I love the diagrams and the way you describe the concept.

  9. SpaceDeep - Albin Penic

     /  November 5, 2009

    Frank, as Duncan says: don’t get me wrong. The description of Flex-10 is really nice. I just posted my opinion about network design in a virtual environment. As you can see in the previous comment, even Kenneth agreed with me and Duncan that the configuration can be an issue. Frank, we all try to share our knowledge for the good of everyone.

  10. Bill

     /  November 6, 2009

    Thank you. For someone new to Flex-10, blades, and VMware, your diagrams have been very helpful.

  11. Hi,

    did you get the issues with the VC modules 5 and 6 resolved? We had something similar with other modules, and it was a problem with the NIC (in this case a normal 1Gb dual-port from Intel).

    Just wonder if there is a bigger underlying issue here.

    Marcus Breiden

  12. Kenneth van Ditmarsch

     /  November 10, 2009

    Hi Marcus,

    No, unfortunately not. I did a complete update on the enclosure (taking the HP compatibility matrix into account) and VCM (2.30), but this unfortunately didn’t solve the problem.
    I’ve requested time to install Windows on a BL460 G6 and configure the network the same as under ESX; that way I can tell whether this is a hardware problem or a VMware problem.

    I’m only seeing this behavior on G6 blades; our G1 blades are working correctly with Windows/VMware.

  13. Hmm… we did see the same issue in our case in Windows; disabling the network card and enabling it again solved the issue.

    So let me explain our issue a little and you can compare it:

    We implemented a c7000 with BL460c blades and 6 VC modules, with 2 CX2 uplink modules as modules 1, 2, 5, 6, 7 and 8. I will have to look up the correct model numbers. The blades were using the Intel quad-port NICs.

    In VCM, Modules 5 and 6 didn’t show any Server Profiles; Modules 7 and 8 showed the Profiles correctly.

    If you disabled one of the modules (5-8) and enabled it again, one of the servers (Windows or VMware) would get an error, most of the time the same server with that module. The link of the NIC would go up, but the connection wouldn’t work.

    VCM was showing that the link was not connected, but the link of the NIC was up in the OS.

    Only disabling the NIC and enabling it again (in Windows) or unloading the NIC driver in ESX would solve the issue, or a reboot of course.

    We are now using Broadcom quad-port NICs (NC325m) and the issue is gone.

    It took us quite a while to troubleshoot because of the strange symptoms, until we figured out what it was. Even HP and VMware weren’t able to troubleshoot it at the beginning; only one of the senior HP support guys who was onsite helped us figure the issue out and solve it.

    So, is that similar to what you are seeing, or something totally different? You didn’t describe the problem too well 🙂

    Marcus Breiden

  14. Kenneth van Ditmarsch

     /  November 12, 2009

    Hi Marcus,

    Let me see if I can get this any clearer:
    – HP BL 460 G6 with vSphere
    – Interconnect 1 to 6 are equipped with modules (Interconnect 1, 2, 5 and 6 contain a Flex-10 Module)

    Whenever I power down IC5 or IC6, then obviously the downlinks towards Mezzanine Slot 2 will disappear (visible from the OS as a failing NIC connection).

    So, we have these BL460 G6’s installed with vSphere. vSphere detects (and logs) that the vmnic connection is failing. Whenever we power the IC module (5 or 6) back on, vSphere keeps on logging that the connection is failing.
    This behavior does not occur when we, for instance, power down IC1 or 2 (while the modules are the same).

    To clarify whether this is an HP problem or a VMware problem, I just installed Windows 2008 on the BL460 G6 and kept the VC Profile exactly the same. Powering down IC 5 (or 6) causes a link failure, and powering it on causes the link to be restored (as it should be) within Windows 2008. So for me it’s clear that I need to register a support call with VMware for this issue.

  15. Hi Kenneth,

    yeah, I totally agree, but this is also noteworthy and important. Thanks for clarifying the issue.

    The links in VCM show that they are linked, I would assume. Did you try unloading and loading the drivers in ESX?

  16. Kenneth van Ditmarsch

     /  November 12, 2009

    Yes, VCM shows the links as up. I’ve only tried to restart the network stack, but that didn’t solve my problem.
    Going to make a call now.
    Going to make a call now.

  17. Michael J

     /  November 17, 2009

    Hi Kenneth,

    Just wanted to say thanks for the valuable info, especially around the words of advice. I’m experiencing the same issues you mention in all 3 comments, so it’s great to see there is some light at the end of the tunnel. Thanks also for the detailed information on Flex-10 and the explanation of how it works. I have one question regarding your network environment: I’m assuming that the Ciscos you are referring to are 6500 series? Just wanted to clarify that.



  18. Kenneth van Ditmarsch

     /  November 17, 2009

    Hi Michael,

    No, thank you 🙂 Correct, I’m referring to the 6500 series. Are you also experiencing “the link that doesn’t come back” from the Interconnect modules?

    Kenneth van Ditmarsch

  19. Michael J

     /  November 17, 2009

    Strike that…. just realised it does say 6509. Read the fine print Michael….

  20. Michael J

     /  November 17, 2009

    Hi Kenneth,

    Thanks for that. We aren’t experiencing that exact issue where the link doesn’t come back from the interconnect; however, we have experienced the issue where “whenever a Virtual Connect module fails, the downlinks towards ESX fail as well (since these are hardwired via the c7000 backplane).”

    We found that if you reset the module, it all comes back up and connectivity is restored; however, after a period of about 3 hours with both VC modules powered on, it fails again and eventually all of the blades within the chassis (including the ESX hosts) lose network connectivity. We then have had to power off one of the VC modules, and all connectivity is restored, which is why I am interested in your posts about the PortFast settings on the Cisco 6509s. We have also been advised to enable SmartLink within Virtual Connect and also to enable LACP on the Ciscos as per: http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00865618/c00865618.pdf (pg49)

    We are currently having this issue in our production environment and have resorted (at this point) to using one Flex-10 VC module until we ascertain the root cause. Any help would be appreciated!



  21. Kenneth van Ditmarsch

     /  November 17, 2009

    Rather strange that after 3 hours both Interconnect modules fail. Doesn’t sound like a spanning tree issue to me then 😉
    Which FW are you running?
    I noticed some remarks on “dropping connections” within FW 2.12 and 2.30 (last week I upgraded to 2.30, and yesterday I saw that they already released 2.31…)

    In my design we explicitly don’t use SmartLink, since losing an FC uplink causes the VC modules to fail over at the hardware level (so my ESX host can keep both of its uplinks).

  22. Michael J

     /  November 17, 2009

    Sorry Kenneth, to clarify: it’s not that both VCs fail. It seems that the active VC module has a failure of some description, but the standby VC module doesn’t see a failure and therefore doesn’t take over, so the active VC module seems to be “half failed”. However, we do see that it fails completely over that 3-hour period, which, as you said, is strange. We are running firmware v2.12 and, like you, I noticed HP released 2.31, which we have yet to upgrade to, so that could be a possibility as well. Thanks for clarifying regarding SmartLink 🙂

  23. Michael J

     /  November 19, 2009

    Hi Kenneth,

    An update on my situation. This was escalated to HP, and as it turns out we resolved it ourselves before HP had even done their diagnosis. We upgraded the f/w to 2.31 and it resolved the issue. However, it wasn’t the firmware itself that fixed the issue; rather, it seems that when we upgraded to 2.12, despite the VC firmware update CLI utility reporting that the update was successful, the image that was copied to the VC module was incomplete / corrupted. So when we upgraded again to 2.31, the image upload was OK. It might be worthwhile doing an MD5 check on the image file when downloading from HP. It’s fixed, but just an awareness for everyone 🙂

  24. Kenneth van Ditmarsch

     /  November 27, 2009

    As an update on my own troubleshooting; VMware has handed me a new Broadcom NIC driver which I will test in the environment next week.

  25. Ron

     /  December 3, 2009

    I think people are making too much of potential failures in the mezz2 Flex-10. Failures rarely happen in our current-gen HP hardware, and if they do, they happen sooner rather than later due to bad production parts. Besides, if you’re like us, you have these hosts sitting in large HA clusters, which is the reason for paying all that money.

  26. Richard Boswell

     /  January 19, 2010

    Michael J,

    It’s possible your issue wasn’t fully firmware-based. We have had similar issues, but reloading the FW didn’t fix them. The VC and OA modules are based on BusyBox using a custom version of Linux, with two primary modules built by HP called VCETH and VCM. VCETH is the low-level module that provides L1/L2 services, whereas VCM provides L3 and management functionality. We had “soft failures” like you mention, where throughput isn’t stopped but slowly withers away, yet we were unable to manage/connect to the VC modules either by HTTPS or SSH. We have been able to connect via the serial interface, restart the VCM domain that way, and regain management functionality. Kinda strange, but we have monitors set up now to alert on it.

    We noticed this trend on several different FW revisions (2.10, 2.12, and 2.31).

  27. Nate

     /  June 16, 2010

    What is the MTU set to on your NICs that are having the failover issues?

    we had issues when we set the MTU to 9000.

    I think the issue ended up being that the Mezz NICs only support jumbo frames up to 4K.
    Early firmware let you set the MTU to 9000, even though it wasn’t supported.
    Later firmwares only let you set the MTU to 4000, which looked like they were taking features away, but in fact they were just correcting a cosmetic issue.

    Once I turned the MTU down to sub-4000 at the NIC driver level, we didn’t have to re-power a server in order to regain link on the second Flex.

  28. Hi Nate,

    We were indeed using 9000. When I left this site I was notified that the Broadcoms indeed didn’t support 9000.
    This apparently was fixed in the Windows drivers as well, since we had some Windows blades on which we could set the MTU to 9000 and some that were limited to 4000 (via the newer Windows driver).

    At that time I passed this information on to the guy who was replacing me at that site, and to be honest I don’t know the latest status of this issue.
    When I return from holiday I’ll check the call with VMware/HP.

  29. Kenneth, great work and thank you for the immense detail in this post. I have to say, as UCS guy, the complexity of HP makes me feel a little ill, but that’s not why I’m posting 🙂

    I’m interested in the test documents you wrote because I think if we merged our docs we could have a standard test harness for HP, Cisco, et al. If we could then capture real data it using the same methodology, then we would have some very interesting scientific data on how these different solutions work (or don’t work!).

    I guess it’s meta-testing, where we have a test of “NIC failure” (e.g. do an administrative shutdown?) or “Northbound link failure” (e.g. unplug cable from system to northbound DC LAN) – the actual technical procedure to execute that test step might differ (e.g on how to do an admin shutdown of a port) but the result is similar enough, and then we can capture the data on how the system behaves (e.g. propagate Link Down status to NICs, if appropriate).

    Just a thought, but I realise we are all busy so perhaps this isn’t a top item… but I’m doing this for UCS anyway so might be interesting to compare notes at least?

    Thanks for the detailed post – KEEP ‘EM COMIN’!

    🙂 Steve

  30. Andrew VanSpronsen

     /  November 26, 2010

    Being an early adopter of Flex-10 myself I am curious if you considered leveraging vDS over Flex-10 . We didn’t have this option initially but am leaning heavily towards vDS, or a Nexus 1K possibly, now that it has traffic shaping and NetIOC. It seems more flexible than “Flex Nics” which only have rate limiting on the egress data path.

  31. Hi Andrew,

    It indeed sounds more flexible; however, I’m not quite sure what kind of new features HP VCM has in the new firmware update, so it’s hard for me to judge without knowing all the ins and outs.

  32. I’ve also been working heavily with HP Flex-10 and ESX.

    I’ve done a blog post on a Flex-10 design for ESX to throw some more information into the mix.

  33. very nice post.
    unfortunately the pictures are too small to read

  34. Interesting, this must have happened when updating to the new WordPress version.
    Let me look into that, thanks for the notice!

    Update: Somehow the source code is correct when looking at the image size. Dunno what’s happening here, will have to dive into that unfortunately 🙁

  35. Darren

     /  July 12, 2012

    Hi ,

    We are putting in Windows blades but are getting presented with 8 NICs in Windows, which is correct as per your explanation above, but we only want to have 2 NICs shown in Windows. We have tried changing profile settings in VC but 8 still appear; any idea how to restrict the number of NICs visible to Windows?


  36. As far as I know (and I haven’t worked with the latest FW versions of the Virtual Connect modules) this wasn’t possible, and the unused NICs are visible but disconnected from a Windows perspective, right?

  37. John S.

     /  July 27, 2012

    I was told by HP that with VMware the VC modules become Active/Active.

  38. Nina

     /  March 29, 2013

    Hi Kenneth,

    Thank you for your article! Great diagrams. One question: is there a reason you only created 2 SUS for an active/passive configuration, versus creating 4 SUS (2 per VC, for iSCSI and ESX traffic) so that you could have an active/active config?

  39. Initially there wasn’t an option to create an active/active SUS spread out over two different (physically separated) VC modules.
    Does that answer your question?

  40. Michael DiMarzio

     /  August 9, 2013


    Great write-up, by the way. We have a situation where two chassis are interconnected based on the HP Multi-Enclosure document. We found the same problem with CRCs, but most of them are seen on the Cisco 4900M with 10Gb uplinks. I’m going to try setting the interface to a static full 10Gb from the VC configuration and see how that works. When I fail over the active side I get CRCs on the Cisco, but on the newly active Cisco switch… very weird. I’ll update with my findings later.

  41. Yves W

     /  November 21, 2013


    What are your experiences with the fully redundant setup (the one without the 2 single points of failure)? We are implementing a setup with c7000 enclosures with 4 HP Flex10-D modules and BL460c Gen8 blades. We are currently planning to send NFS traffic over the mezzanine NICs and the “regular” network traffic over the LOM NICs. But this obviously has the single points of failure you discuss.

