So here we are, a lovely Thursday morning at work, and a requirement for a new VM comes up. I'm thinking it's not a big deal since I have deployed thousands of VMs before, but there is a catch this time (there always is!). All of my Windows Server templates are virtual machine hardware version 8, and I need to deploy one server to an ESXi 4.1 host. Great! ESXi 4.1 supports hardware version 7 at most, so a hardware version 8 VM will not work. If you attempt to add a hardware version 8 VM to the inventory on an ESXi 4.1 host, you will be met with the following outcome:
The VM adds fine and without any errors, but it's grayed out and in an invalid state. Not much you can do here apart from removing it from the inventory.
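One workaround I have seen for this (not officially supported, so treat it as a sketch, and take a backup of the .vmx first) is to downgrade the hardware version by hand in the VM's .vmx file and then re-register the VM. The datastore and VM names below are placeholders, substitute your own:

```shell
# On the ESXi host (or via the datastore browser), edit the VM's .vmx file.
# Hypothetical path - substitute your own datastore and VM name.
vi /vmfs/volumes/datastore1/MyVM/MyVM.vmx

# Change this line:
#   virtualHW.version = "8"
# to:
#   virtualHW.version = "7"

# Then register the VM back into the inventory:
vim-cmd solo/registervm /vmfs/volumes/datastore1/MyVM/MyVM.vmx
```

This only makes sense if the VM doesn't rely on any hardware version 8 features, of course.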
A closer look at what's happening (or not happening, as a matter of fact):
“Perf Charts service experienced an internal error. Message: Report application initialization is not completed successfully. Retry in 60 seconds.”
Now, this error has been around for as long as I can remember. There are many causes for it, but I will try to cover the one I have experienced (and solved).
Let’s get to it.
In vCenter 4.x this was never an issue; the charts stopped working only after I upgraded my vCenter to version 5.0 Update 2. Generally you look at the vCenter log files (stats.log is what we're after) to determine the root cause. The location of stats.log depends on the version of Windows, and it's as follows:
An interesting question, and even more interesting is why VMware would use such an archaic version of the mpt2sas driver in their fairly recent builds of ESXi. Quick background on why I'm writing about this.
I bought my IBM M1015 RAID controller from eBay for about £65, and since the M1015 is not supported by ESXi natively, cross-flashing was the only way to get it working without too much hassle (if you can call cross-flashing RAID controllers not too much hassle!). I went for IT mode as opposed to IR for simplicity and ease of adding drives without mucking about with virtual disks etc. I will write a separate post about how to cross-flash to IT/IR mode later this week (if time permits).
Going back to my issue, here is what my IBM card looks like right now cross-flashed to LSI 9211-8i in IT mode:
As you can see, it's running the latest available firmware (P15) and it's in IT mode, meaning it simply does straight pass-through for any connected hard drives. Once we're booted into ESXi we can quickly list all HBAs and their driver names by issuing this command:
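For reference, on ESXi 5.x this is typically done with one of the following (my assumption as to the exact command; the output format differs slightly between them):

```shell
# List all storage adapters (HBAs) together with the driver each one is using
esxcli storage core adapter list

# Older-style equivalent that also works on ESXi 4.x hosts
esxcfg-scsidevs -a
```

Either way, the card should show up bound to the mpt2sas driver when cross-flashed to a 9211-8i.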
Here is a rather uninteresting-looking error message that pops up when you don't have Syslog configured properly on ESXi 5.x. I have seen a few variations of this error, but I only have one screenshot at hand!
In a nutshell, the cryptic message says that you have logs configured on non-persistent storage and they will not survive a reboot of the host. If we look closely at the exact location, they are indeed configured to point at the ESXi scratch partition, i.e. /scratch/log:
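You can confirm where the host is currently logging straight from the ESXi shell, a quick sketch for ESXi 5.x (output field names vary slightly between builds):

```shell
# Show the current syslog configuration, including the active log directory
esxcli system syslog config get
```

The local log output line will point at the scratch partition (e.g. /scratch/log) when logs are sitting on non-persistent storage.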
There are at least three ways to get us out of trouble in this situation:
Use 3rd party Syslog server,
Use Syslog server that’s bundled with vCenter 5,
Use persistent storage to store your logs.
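For the third option, pointing the logs at a datastore can be done with esxcli. A sketch, assuming a datastore called datastore1 (substitute your own):

```shell
# Point the syslog output at a directory on persistent (VMFS) storage
esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/logs

# Reload syslog so the new configuration takes effect
esxcli system syslog reload
```

After a reload the warning should clear, and the logs will now survive host reboots.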
The error message in the title comes up when you create a new host profile and want to check compliance against a cluster of hosts, including the host the profile was based on. Kinda strange, as you would expect the source host to be compliant with its own profile! It turns out it's not. Here is what the error looks like:
To get us out of trouble here, right-click on the host, choose "Update Answer File…", fill in what's missing (domain credentials) and click Update to complete the task. Domain credentials are not stored by default and are required for the compliance check to work (that's only if you have joined your hosts to the domain!).
Here is one hell of an annoying error message (to look at, if you have OCD, that is!).
Symptoms: a yellow exclamation mark on the host icon and an error in the "Summary" tab. This only happens after you enable SSH on the ESXi host (which you want to).
Quick screenshots from the vSphere client showing the ugly:
To get rid of the above you can proceed in two ways:
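Typically the choice is between turning SSH back off (via the vSphere client, under Configuration > Security Profile > Services) or keeping SSH running and suppressing the warning itself. The latter can be done with an advanced setting; a sketch for ESXi 5.x:

```shell
# Suppress the "SSH for the host has been enabled" warning
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1

# Set it back to 0 if you ever want the warning back
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 0
```

The same setting can also be changed from the vSphere client under Advanced Settings if you prefer the GUI.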
This week I took delivery of 4 x Dell R720s at work for our Dev environment. Lovely kit, this new 12G line of servers: 96GB of RAM each, 10 NICs, dual SD cards with 2 x 2GB cards for a local install of ESXi, etc. Fast forward a few hours, after all the cabling was done, I pop the CD in to actually install ESXi 5.0 and guess what?
I'm like, what's not detected?!? When we installed the last batch of 11G servers (R810s, only a few months back) there were no messages like that at all! In each of the new servers there is a daughter card (NDC, 4 ports based on the Broadcom BCM5720), a PCI-E card (4 ports, Broadcom BCM5719) and another PCI-E card, this time with 2 ports, based again on the BCM5720 chip by Broadcom. Unless I'm completely unlucky, there must be a driver included for at least one of those cards? It turns out there is not.
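For what it's worth, a quick way to check whether a given ESXi build actually carries the Broadcom tg3 driver (the one these BCM57xx chips want) is to inspect the installed VIBs and loaded modules from a running host. A sketch, assuming shell access:

```shell
# List installed driver VIBs and look for the Broadcom tg3 NIC driver
esxcli software vib list | grep -i tg3

# Check whether the tg3 module is actually loaded in the vmkernel
vmkload_mod -l | grep -i tg3
```

If neither command returns anything, the image simply doesn't ship the driver, which matches what the installer was telling me.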