More SSLv3 (POODLE vulnerability) woes, as this time NetApp VSC 5.0 is broken!
So my vCenter 5.5 Update 2a got updated to Update 3e without much of a problem, but SRM and VSC are busted now. Great. Virtual Storage Console sort of works, but the backup jobs tab hasn’t got any entries and you cannot re-create them due to the following error:
Unable to connect to Virtual Storage Console server. Please make sure that the Virtual Storage Console server is running.
Cleaning up NetApp SnapManager for Virtual Infrastructure snapshots in VMware vSphere can be a pain if you have a large number of VMs being backed up by SMVI. In my case there are snapshots that are consistently left behind, like so:
which just pile up as the days go on. I think this is some sort of bug in either the vSphere API or the way NetApp handles snapshotting during the backup window.
To have these snapshots cleaned up after the backup jobs run, I have written the following PowerShell script to deal with the situation:
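The PowerShell script itself did not survive the formatting of this post. As a rough stand-in, here is a sketch of the same cleanup idea done straight from the ESXi 5.x shell with `vim-cmd` instead; the assumption (not confirmed by the post) is that the leftover SMVI snapshots have “smvi” somewhere in their names:

```shell
# Sketch only -- an ESXi-shell alternative to the (missing) PowerShell script.
# Assumption: leftover SMVI snapshots contain "smvi" in their snapshot name.

# Pull the ids of SMVI-named snapshots out of saved
# `vim-cmd vmsvc/snapshot.get <vmid>` output, where a "Snapshot Name" line
# is followed a line or two later by a "Snapshot Id" line.
extract_smvi_ids() {  # usage: extract_smvi_ids <saved-snapshot.get-output>
  awk '/Snapshot Name/ { want = (tolower($0) ~ /smvi/) }
       /Snapshot Id/ && want { print $NF; want = 0 }' "$1"
}

# On the host it would be wired up roughly like this (VM id 42 is made up):
#   vim-cmd vmsvc/snapshot.get 42 > /tmp/snaps.txt
#   for id in $(extract_smvi_ids /tmp/snaps.txt); do
#       vim-cmd vmsvc/snapshot.remove 42 "$id"
#   done
```

The parsing is kept in a small function so it can be checked against captured output before letting it loose on live VMs.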
Last week I installed VMware vSphere 5.5 on my test host, and today it was time to get NetApp Virtual Storage Console 5.0 going so I could take advantage of Rapid Cloning and the other good stuff that VSC 5.0 includes.
Installation was straightforward (recommended read – Virtual Storage Console 5.0 for VMware® vSphere® – Installation and Administration Guide) and the next logical step was to add my Storage Systems so I could provision datastores etc. From within the VSC section of the vSphere Web Client I tried to add a new Storage System, only to be presented with the following:
“Unable to add storage systems due to insufficient privileges. You do not have sufficient permission to perform this action on: the root object. Contact your administrator to add the following missing privileges: Add, Modify, and Skip storage systems”
Today was meant to be just another ordinary day in the office, and for the most part it was. Around 1PM I noticed that the 2AM NetApp VSC backup job was still running… A bit odd, I thought, as this had never happened before – it’s normally done in 30 minutes tops. The vSphere client was showing 1 VM in recent tasks as being in progress. Hmm, so what’s up with that VM then? It looked completely stuck: I couldn’t edit settings, power off, reset, etc. Nothing worked. The Tasks and Events tab explained the situation a bit better:
So basically the backup started and got stuck while taking snapshots, due to being unable to quiesce the file system. Beyond this point the vSphere client is pretty much useless, so it was time to hit the command line via SSH to get me out of trouble. First you need to know the name of your stuck VM – it doesn’t have to be letter for letter, as you can simply search for it using grep in the list of active processes on the ESXi host. My VM had ‘STD’ in its name (aka the Standard flavour of Windows Server 2012) and to find the actual PID number I had to run the following command:
To kill the process that runs your VM, it’s simply the kill command followed by the PID number; in my case:
Now it should be gone. A quick check for the PID number we killed shows there is no such PID anymore – good.
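The exact commands were lost when this post was migrated, so here is a sketch of the same find-and-kill flow using the documented ESXi 5.x `esxcli vm process` namespace; the World ID shown is illustrative, and the small parsing function is an assumption of mine, not from the original post:

```shell
# Sketch of the stuck-VM cleanup. On ESXi 5.x, `esxcli vm process list`
# prints one block per running VM, with a "World ID:" line followed a few
# lines later by a "Display Name:" line.

# Pull the World ID of the VM whose display name matches a pattern
# (case-insensitive; a partial match such as just "STD" is fine):
find_world_id() {  # usage: find_world_id <pattern> <saved-process-list>
  awk -v pat="$1" 'BEGIN { pat = tolower(pat) }
    /World ID:/                          { wid = $NF }
    /Display Name:/ && tolower($0) ~ pat { print wid }' "$2"
}

# On the host it would be used roughly like this (World ID is made up):
#   esxcli vm process list > /tmp/vms.txt
#   find_world_id STD /tmp/vms.txt
#   esxcli vm process kill --type soft --world-id 1234567
# If the soft kill does not work, --type hard (and, as a last resort,
# --type force) are the documented escalation steps.
```

Saving the process list to a file first lets you double-check the match before killing anything.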
At this point my VSC job simply timed out and moved on to back up the other VMs in the datastore.