One of our servers is an ESXi host with an attached HP StorageWorks MSA60.
When we logged into the vSphere client, we noticed that none of our guest VMs are available (they're all listed as "inaccessible"). When we check the hardware status in vSphere, the array controller and all attached drives appear as "Normal", but the drives all show up as "unconfigured disk".
We rebooted the server and tried going into the RAID config utility to see what things look like from there, but we received the following message:
"An invalid drive movement was reported during POST. Modifications to the array configuration after an invalid drive movement may result in loss of old configuration information and contents of the original logical drives."
Of course, we're very confused by this because nothing was "moved"; nothing changed. We simply powered up the MSA and the server, and have been having this issue ever since.
I have two main questions/concerns:
- Since we did nothing more than power the equipment down and back on, what could've caused this to happen? We of course have the option to rebuild the array and start over, but I'm leery about the risk of this happening again (especially since I have no idea what caused it).
- Is there a snowball's chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore our VM backups?
- Since we did nothing more than power the equipment down and back on, what could've caused this to happen?
A variety of things. Do you schedule reboots on all of your equipment? If not, you should, just for this reason. On the one host we have, XS decided the array wasn't ready in time and didn't mount the main storage on boot. Always nice to know about these things in advance, right?
- Is there a snowball's chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore our VM backups?
Maybe, but I've never seen that particular error. We're talking very limited experience here. Depending on which RAID controller the MSA is attached to, you may be able to read the array information from the drives on Linux using the md utilities, but at that point it's faster just to restore from backups.
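For what it's worth, here's a minimal sketch of that kind of read-only poking around, assuming a Linux live environment with mdadm installed and the MSA's drives visible as /dev/sd* block devices. Note that Smart Array controllers generally keep their own on-disk metadata rather than Linux md superblocks, so this may well turn up nothing; it only reads superblocks and never writes anything.

```python
#!/usr/bin/env python3
"""Rough sketch: scan attached disks for readable md RAID metadata.

Assumes a Linux live environment with mdadm installed and the MSA's
drives exposed as /dev/sd* whole-disk devices. mdadm --examine only
reads metadata; it does not modify anything on disk.
"""
import glob
import subprocess


def examine(device: str) -> None:
    """Print any md superblock information mdadm can find on a device."""
    result = subprocess.run(
        ["mdadm", "--examine", device],
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        print(f"=== {device} ===")
        print(result.stdout)
    else:
        # No md superblock found (or device unreadable); note it and move on.
        print(f"{device}: no md metadata found ({result.stderr.strip()})")


if __name__ == "__main__":
    # Whole-disk devices only (sda, sdb, ...), not partitions.
    for dev in sorted(glob.glob("/dev/sd[a-z]")):
        examine(dev)
```

If that finds nothing (likely with controller-managed metadata), you're back to vendor tools or restoring from backups.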
I actually rebooted this server multiple times about a month ago when I installed updates on it. The reboots went fine. I also completely powered that server down at around the same time because I added more RAM to it. Again, after powering everything back on, the server and RAID array information was all intact.
Does your normal reboot schedule for the host include a reboot of the MSA? Could it be that they were powered back on in the wrong order? MSAs are notoriously flaky; most likely that's where the issue is.
I'd call HPE support. The MSA is a flaky unit, but HPE support is pretty good.
We unfortunately don't have a "normal" reboot schedule for any of our servers :-/.
I'm not sure what the correct order is :-S. I would assume that the MSA should get powered on first, then the ESXi host. If that's correct, we've already tried doing that since we first discovered this problem today, and the problem persists 🙁.
We don't have a support contract on this server or the attached MSA, and they're most likely well out of warranty (ProLiant DL360 G8 and a StorageWorks MSA60), so I'm not sure how much we'd have to spend to get HP to "help" us :-S.