(Not so) happy Monday morning!
First request of the day – one of the VMs was unresponsive and had to be rebooted (RDP was hanging and no incoming connections were allowed). Thinking it was no big deal, I rebooted the VM, but I noticed that VMware Tools was not up to date, so I thought: why not update it at the same time? This is where it all started to go wrong… At this point the VM was not responding (which I knew about) AND the VMware Tools install was completely stuck at some random percentage in Recent Tasks. Great start.
A few things I tried before finding the proper solution:
Trying to cancel the install by right-clicking on “Initiated VMware Tools Install or Upgrade” in Recent Tasks didn’t yield any results; the option was greyed out.
Powering off the VM was unsuccessful as well. Again, the options to Power Off, Reset etc. were greyed out.
Ejecting the .iso file responsible for the VMware Tools install didn’t help either, since the .iso was not connected.
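When the vSphere Client options are all greyed out like this, a stuck Tools install can also be cancelled from the host’s command line. A minimal sketch, assuming you have SSH access to the ESXi host; the VM name and the ID `42` below are placeholders for whatever `getallvms` returns on your host:

```shell
# List VMs registered on this host and note the Vmid of the stuck one
vim-cmd vmsvc/getallvms | grep -i myvm

# Cancel the hung VMware Tools install/upgrade for that Vmid (e.g. 42)
vim-cmd vmsvc/tools.cancelinstall 42
```

After that the task should disappear from Recent Tasks and the power operations become available again.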
Following on from my last post, How to update the mpt2sas driver on ESXi 5, today we are going to look at updating network drivers for Broadcom and Intel NICs on a VMware ESXi host. The procedure documented below will work with any version of ESXi 4.x and 5.x.
Let’s start by listing all network interfaces in the “Up” state:
esxcfg-nics -l | grep Up
As you can see, there are 10 network adapters in the “Up” state, which also happens to be the total on this host – 4 Broadcom 5709s and 6 Intel 82576s. The portion of the output we’re particularly interested in is just before the word “Up”, i.e. bnx2 and igb – these are the driver names that ESXi is using for our network cards. Now that we have this established, let’s look at the version of said drivers:
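One way to check this, as a sketch – the vmnic number and the `bnx2` module name are taken from the `esxcfg-nics` output above, so substitute your own:

```shell
# Driver name, version and firmware for a given interface
ethtool -i vmnic0

# Or query the loaded kernel module directly
vmkload_mod -s bnx2 | grep -i version
```

Either output gives you the currently running driver version, which you can then compare against the latest driver bundle on VMware’s download site.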
I have been entertaining an HP ProLiant MicroServer N36L for nearly a year now. Great machine for the money, and at around £120 (after the cashback) it was an absolute bargain at the time! The box itself has been upgraded to 8GB of DDR3 RAM (the maximum the motherboard can take) and it runs ESXi 4.1 U1 absolutely fine. Disk space wise, there is only a 30 GB Vertex SSD for a few VMs (and .vswp files); the rest of the storage is provided by a QNAP TS-509 NAS (by means of NFS and iSCSI). This setup has been absolutely flawless so far, but there is simply not enough RAM and CPU power for my needs (or rather my VMs’). CPU Ready goes through the roof quite often due to the AMD Athlon II Neo 1.30GHz processor, which is only a slightly better performer than the Intel Atom range. 8GB of RAM is tight, and ESXi was paging the VMs like mad to their .vswp files – which is why I put them on the SSD, though that helped only to a certain extent. In the end I’d kinda had enough and decided to build a custom server which would address all of the issues above.
Here is what I came up with:
Processor: Intel Xeon X3450 2.66GHz with HT/VT-x and VT-d
Processor Cooling: Corsair CWCH100 Hydro Series H100 Cooler
Processor Cooling Fans: 2 x Noctua NF-P12
Motherboard: Supermicro X8SIL-F-O Server Board
RAM: Kingston 4 x 8GB [KVR1333D3Q8R9S/8G]
Case: Lian-Li PC-V350B
Case Noise Dampening: AcoustiPack LITE (APL) Multi-Layered Soundproof Material
Case Backplate: Custom backplate to incorporate moving PSU to the right and adding 120mm exhaust fan
5.25″ Drive Bay Cooling: Evercool Armour ATX HDD Cool Box HD-AR
5.25″ Drive Bay Cooling Fan: Noctua NF-R8-1800
Case Exhaust Fan [Back]: 1 x Noctua NF-P12
Case Exhaust Fan Guard: 120mm Standard Wire Case Fan Guard Grill [Black]
Power Supply: Be Quiet! BN180 L8 430W Modular PSU
Storage 1: Samsung 830 256GB SSD (main datastore)
Storage 2: Seagate Barracuda 2TB [ST2000DM001] (second datastore)
Storage 3: Vertex 1 30GB SSD (.vswp datastore)
Storage 4: Patriot Extreme Performance Xporter XT Rage 8GB (local storage for ESXi)
Storage Adapter Bracket: SilverStone SST-FP55B (allows 1 x 5.25″ and 2 x 2.5″ in one 5.25″ slot!)
Network 1: Onboard Dual Intel 82574L Gigabit Ethernet Controllers
Network 2: HP NC360T PCI Express Dual Port Gigabit Server Adapter [which effectively is Intel PRO/1000 PT Dual Port NIC]
Network 3: Intel Ethernet Converged Network Adapter X520-DA2, 10GbE, Dual Port
RAID Controller: IBM ServeRAID M1015 [which kinda is OEM version of LSI 9220-8i]
ODD: Toshiba/Samsung TS-H653 20x DVD±RW DL SATA Drive
I will be updating this post as work on the server progresses! Stay tuned.
Project Update #1
Project Update #2
Project Update #3
Project Update #4
Project Update #5
Project Update #6
Project Update #7
So here I am, sitting at 10PM converting templates (.vmtx) to VMs (.vmx) – nothing simpler, right? Wrong! It fails with “This Host or Cluster is not a Valid Selection” as soon as you click Next on the cluster selection… Great, just great. Exactly what I wanted before heading off to bed. Here is the error:
And the solution (as usual, very simple) is to manually remove any entries referencing .iso files from the .vmtx file. I had one pointing to a volume where the .iso used to sit, which was no longer connected as storage to my ESXi host:
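A minimal sketch of the edit – the template path, datastore name and .iso path below are made-up placeholders, and the heredoc just stands in for the relevant lines of a real .vmtx; on an actual host you would edit your own file in place (back it up first):

```shell
# Stand-in for the CD-ROM section of a real .vmtx file (placeholder paths)
VMTX=/tmp/mytemplate.vmtx
cat > "$VMTX" <<'EOF'
ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
ide1:0.fileName = "/vmfs/volumes/old-nfs-store/ISOs/install.iso"
EOF

# Point the virtual CD-ROM back at a plain device and clear the stale .iso path
sed -i 's|^ide1:0.deviceType = .*|ide1:0.deviceType = "atapi-cdrom"|' "$VMTX"
sed -i 's|^ide1:0.fileName = .*|ide1:0.fileName = ""|' "$VMTX"

grep ide1 "$VMTX"
```

With no .iso reference left in the file, the conversion wizard no longer tries to validate a datastore the host can’t see.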
Now you can happily click your favourite ‘Next’ button and watch it complete without problems.
Life is awesome!
This week I took delivery of 4 x Dell R720s at work for our Dev environment. Lovely kit, this new 12G line of servers: 96GB of RAM each, 10 NICs, dual SD cards with 2 x 2GB cards for a local install of ESXi, etc. Fast forward a few hours, after all the cabling was done, I popped the CD in to actually install ESXi 5.0 and guess what?
I’m like: what’s not detected?!? When we installed the last of the 11G servers (R810s, only a few months back) there were no messages like that at all! In each of the new servers there is a daughter card (NDC, 4 ports based on the Broadcom BCM5720), a PCI-E card (4 ports, Broadcom BCM5719) and another PCI-E card, this time with 2 ports based again on the BCM5720 chip by Broadcom. Unless I’m completely unlucky, there must be a driver included for at least one of those cards? It turns out there is not.
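The usual way out of this is to get an updated Broadcom driver offline bundle from VMware’s download site and either slipstream it into a custom install ISO, or install it on a host that is already up. A sketch of the latter, assuming the bundle has already been copied to a datastore – the bundle filename below is a placeholder for whatever the actual download is called:

```shell
# Install the updated NIC driver offline bundle (placeholder filename),
# then reboot the host so the new module is loaded
esxcli software vib install -d /vmfs/volumes/datastore1/broadcom-offline-bundle.zip
reboot
```

For a fresh install on hardware whose NICs the stock image doesn’t recognise at all, the same offline bundle has to go into the install media itself instead, since the installer refuses to continue without a detected network adapter.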