vSphere 7.0 – VM fails to boot from ISO

In the last two weeks I refreshed my vSphere and vSAN lab environment, which included upgrading to vSphere 7.0. I decided to do a fresh install of both vCenter and my ESXi hosts, which went pretty smoothly.

After upgrading I decided to also create a fresh Windows and a fresh Ubuntu desktop as part of my Horizon lab (which I upgraded to 7.12 in the meantime, as this is the minimum release supported with vCenter 7.0).

Normally installing a clean Windows or Ubuntu desktop is not a problem (kind of next/next/finish), but this time I just didn't manage to get the VMs to boot from .iso. I made sure the virtual DVD device was connected (and set to connect at power on), but still, once the VM booted it tried to boot from the network and the DVD device showed as disconnected.
Since I had already seen some problems with replicating VMs on the latest virtual hardware (version 17), I initially retried the installation with a new VM at 6.7 compatibility (vHW 14), as this had worked without any problems in the previous environment, with the .iso file located on the same NFS datastore. This also failed, so the problem was not related to the virtual hardware version.

Then I checked the vmware.log file of the VM that failed to boot from .iso and I noticed the following:

2020-05-25T08:04:58.619Z| vcpu-0| I125: CDROM: Connecting sata0:0 to '/vmfs/volumes/b8b05642-89df68b7/ubuntu-18.04.3-desktop-amd64.iso'. type=2 remote=0
2020-05-25T08:04:58.620Z| vcpu-0| I125: FILE:open error on /vmfs/volumes/b8b05642-89df68b7/ubuntu-18.04.3-desktop-amd64.iso: Read-only file system
2020-05-25T08:04:58.620Z| vcpu-0| I125: AIOGNRC: Failed to open '/vmfs/volumes/b8b05642-89df68b7/ubuntu-18.04.3-desktop-amd64.iso' : Read-only file system (1e0002) (0x21).
2020-05-25T08:04:58.620Z| vcpu-0| I125: CDROM-IMG: image open for '/vmfs/volumes/b8b05642-89df68b7/ubuntu-18.04.3-desktop-amd64.iso' failed: Read-only file system (1e0002).
2020-05-25T08:04:58.620Z| vcpu-0| I125: CDROM-IMG: Failed to connect '/vmfs/volumes/b8b05642-89df68b7/ubuntu-18.04.3-desktop-amd64.iso'.
2020-05-25T08:04:58.620Z| vcpu-0| I125: CDROM: Failed to connect CDROM device '/vmfs/volumes/b8b05642-89df68b7/ubuntu-18.04.3-desktop-amd64.iso'.
2020-05-25T08:04:58.620Z| vcpu-0| I125: Msg_Post: Warning
2020-05-25T08:04:58.620Z| vcpu-0| I125: [msg.cdromImage.cantOpen] Cannot connect file "/vmfs/volumes/b8b05642-89df68b7/ubuntu-18.04.3-desktop-amd64.iso" as a CD-ROM image: Read-only file system
2020-05-25T08:04:58.620Z| vcpu-0| I125: [msg.device.startdisconnected] Virtual device 'sata0:0' will start disconnected.

The fileshare presented to ESXi as an NFS datastore had Read-Only permissions (and always had), but seeing these messages in the log made me change the permissions to Read/Write and voilà … I was able to successfully boot the VMs from the .iso file on the NFS datastore and continue my guest OS installation.

So apparently vSphere 7.0 requires read/write access to the datastore when booting a VM from an .iso file stored there.
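
To quickly verify how each host has mounted an NFS datastore, something like the following pyVmomi sketch can be used (the vCenter address, credentials and datastore name are placeholders for this example); a mount reported as 'readOnly' would explain exactly the ISO connect failure above:

# Minimal pyVmomi sketch (placeholder names/credentials): report how each ESXi
# host has mounted a given NFS datastore: 'readOnly' vs 'readWrite'.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate verification
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    if ds.name == "nfs-iso":                  # placeholder datastore name
        for mount in ds.host:                 # one entry per host mounting it
            print(f"{mount.key.name}: mounted {mount.mountInfo.accessMode}")
Disconnect(si)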

VMware Re-certification requirements have changed (for the better …)

As of February 4th, the requirement to re-certify every two years in order to keep your VCP certification active no longer exists (unless you are a VMware partner or are in some other program that requires a more current certification).

This means you now have more time to study towards a more recent certification, and in many cases the upgrade path is shorter: you can simply take the latest version of a specific VCP exam, as long as you are no more than three versions behind the most current VCP version for that track.

This announcement also means that some VCP certifications that were previously de-activated have now become active again!

For all details (such as the fact that this applies only to VCP-level certifications, and that there are different requirements when you upgrade to a different track), please see the official VMware blog post on this announcement.

Happy re-certifying!

2019 VCP Certifications

As mentioned in an earlier blog post, the naming of the VCP certifications is changing to reflect the year in which the certification was achieved rather than the version of the product that the certification applies to.

As of this week (January 16th to be precise) the new VCP certification naming is effective. Currently the following certifications are available:

For more details, please read the VMware Education blog post on this topic.

VMware Certification Naming Changes

Last week VMware Education announced a change in the naming of the various certifications: the year in which a certification is achieved is now reflected in its name.

Until now the name of a certification reflected the version of the product it was related to (for example, VCP6-DCV referred to the vSphere 6.0 release). This could cause confusion about how current a specific certification is, since product releases do not follow a strict cadence, and neither do the corresponding certification exams. For example, my VCP4-DCV certification was 15 months older than my VCP5-DCV certification, but the latter was over 3 years older than my VCP6-DCV certification.

Also, both my “DCV” and “DTM” certifications are valid, but one is called VCP6 and the other VCP7 (as they relate to vSphere 6.0 and Horizon 7.0 respectively).

So changing the name to reflect the year in which the certification was achieved makes sense and will result in certification names like VCP-DTM 2019 and VCAP-DCV Deploy 2020.

It is important to understand that the change only concerns the naming of the certifications. There are no changes in the requirements to achieve a certification or to re-certify (so a certification is still valid for two years and can be renewed by taking a newer exam in the same track or an exam in a different track). Also, the name of the certification exam will still reflect the product version that the exam questions are based on.

More detailed information about this announcement can be found in the FAQ document on the VMware certification website.

vSAN 6.7 Encryption

In vSphere 6.5 VMware introduced the ability to encrypt virtual machine data on a per-VM basis. This is achieved using VAIO filtering, and a specific storage policy indicates whether or not a VM needs to be encrypted.

With vSAN 6.6 another form of encryption was introduced, in which the entire vSAN datastore is encrypted; as a result every VM stored on the vSAN datastore is encrypted (and hence no specific policy is required).

For both encryption methodologies a KMS server (or a cluster of KMS servers for production environments) that supports the KMIP protocol needs to be installed and configured in vCenter. Although vSphere and vSAN encryption can use the same configured KMS server/cluster, there is a small but important difference in the way the keys required for encrypting the data are communicated to the ESXi hosts.

In the case of vSphere (VM) encryption, ESXi needs to be able to communicate with vCenter to get the specific Key Encryption Key (KEK) for a VM when that VM is powered on (or created). So when vCenter is not available, such actions may not be possible.

For vSAN encryption however, an ESXi host only needs to communicate with vCenter when vSAN encryption is enabled. At that moment the KEK IDs, which identify the Key Encryption Keys used to wrap the Data Encryption Keys (DEKs) that encrypt the disks, are sent from vCenter to the ESXi hosts. Using these KEK IDs the hosts then communicate directly with the KMS server to get the actual KEKs.
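
Conceptually this two-layer key hierarchy (a KEK wrapping the DEKs, with the host only holding the KEK ID and the wrapped DEKs) can be illustrated with a few lines of Python. This is purely an illustration of key wrapping, not the actual vSAN implementation, and the KMS is simulated with a simple dictionary:

# Conceptual illustration only (not the real vSAN code): a DEK encrypts the disk
# data, a KEK wraps the DEK, and the host persists just the KEK ID + wrapped DEK.
from cryptography.fernet import Fernet

kms = {"kek-id-001": Fernet.generate_key()}     # stand-in for the KMS server/cluster

# When encryption is enabled: generate a DEK and wrap it with the KEK.
kek_id = "kek-id-001"
dek = Fernet.generate_key()
wrapped_dek = Fernet(kms[kek_id]).encrypt(dek)  # only this wrapped form is stored

# After a reboot the host talks to the "KMS" directly using the stored KEK ID,
# unwraps its DEK and can decrypt the disks without vCenter being available.
recovered_dek = Fernet(kms[kek_id]).decrypt(wrapped_dek)
assert recovered_dek == dek

ciphertext = Fernet(recovered_dek).encrypt(b"disk block contents")
print(Fernet(recovered_dek).decrypt(ciphertext))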

To show this mechanism I have created a little demo video. For my own educational purposes I have used the vSphere (and vSAN) 6.7 version, which allows me to use the new vSphere (HTML5) client functionality.

Read more: vSAN 6.7 Encryption

Upgrading my vSAN Cluster

Some time ago I decided to upgrade my home lab environment running vSphere (from 6.0 U3 to 6.5 U1) and vSAN (from 6.2 to 6.6.1).

I started with upgrading the vCenter appliance, which is quite a smooth upgrade process. The only problem I had was that initially the upgrade wizard did not give me the option to select “Tiny” as the size for the new appliance. This appeared to be an issue with the disk usage of the existing appliance. After deleting a bunch of old log files and dump files from the old vCenter appliance I retried the upgrade wizard; this time the “Tiny” option was available – which is a better fit for my “tiny” lab 🙂 – and the upgrade process went just fine.

Next up was the ESXi upgrade (I have three hosts). My first attempt was an in-place upgrade using Update Manager.

Read more: Upgrading my vSAN Cluster

VMware vSAN Specialist exam experience

Recently VMware Education announced the availability of the “vSAN Specialist” exam which entitles those who pass it to receive the “vSAN Specialist 2017” badge. The badge holder is a “technical professional who understands the vSAN 6.6 architecture and its complete feature set, knows how to conduct a vSAN design and deployment exercise, can implement a live vSAN hyper-converged infrastructure environment based on certified hardware/software components and best practices, and can administer/operate a vSAN cluster properly“.

As I consider myself to be a vSAN specialist I thought this one should be rather easy to achieve, so after I read about it last week, I immediately scheduled my exam at Pearson VUE and took it today.

Read more: VMware vSAN Specialist exam experience

VMware VVOLs with Nimble Storage

VMware Virtual Volumes (aka VVOLs) were introduced in vSphere 6.0 to allow vSphere administrators to manage external storage resources (and especially the storage requirements of individual VMs) through a policy-based mechanism called Storage Policy Based Management (SPBM).
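
As an illustration of what "policy-based" means in practice, a storage policy can also be attached to a VM programmatically. The snippet below is only a hedged pyVmomi sketch: the profile ID is a placeholder that would normally be looked up through the SPBM (pbm) API or the vSphere client, and the VM object is assumed to have been retrieved already (for example with a container view as in the earlier sketch):

# Hedged pyVmomi sketch: attach an existing storage policy (by its profile ID)
# to a VM's home files. The profile ID below is a placeholder, not a real policy.
from pyVmomi import vim

def apply_storage_policy(vm, profile_id):
    spec = vim.vm.ConfigSpec(
        vmProfile=[vim.vm.DefinedProfileSpec(profileId=profile_id)]
    )
    return vm.ReconfigVM_Task(spec=spec)   # returns a vCenter task to monitor

# apply_storage_policy(my_vm, "aa6d5a82-1c88-45da-85d3-3d74b91a5bad")  # placeholder ID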

VVOL in itself is not a product but rather a framework defined by VMware: each storage vendor can use this framework to enable SPBM for vSphere administrators by implementing the underlying components (VASA providers, storage containers with their storage capabilities, and protocol endpoints) in their own way (a good background on VVOLs can be found in this KB article). This makes it easy for each storage vendor to get started with VVOL support, but it also means that it is not easy to compare vendors with regard to this feature (“YES, we support VVOLs …” does not really say much about the way an individual vendor has implemented the feature in their storage array and how it compares to other vendors).

In this blog I want to show how Nimble Storage (now part of HPE) has implemented VVOL support. For now I will focus on the initial integration part. In a future blog I will show how this integration can be used to address the Nimble Storage capabilities for individual VMs through the use of storage policies.

Read more: VMware VVOLs with Nimble Storage

Creating a new vSAN 6.6 cluster

Last month VMware released vSAN version 6.6 as a patch release of vSphere (6.5.0d). New features included Data-at-Rest encryption, enhanced stretched clusters with local protection, a change of vSAN communication from multicast to unicast, and many more.
Perhaps a little less impressive, yet very useful, is the (simpler) way a new vSAN cluster is now configured. To illustrate this I have recorded a short demo of the configuration of a new vSAN 6.6 cluster.
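
For those who prefer scripting over the wizard shown in the demo, the basic enablement step can also be done through the API. The sketch below is a hedged pyVmomi example (the cluster object is assumed to have been retrieved already, and the disk-claim setting is an assumption; it is not the complete workflow from the demo):

# Hedged pyVmomi sketch: enable vSAN on an existing cluster object.
from pyVmomi import vim

def enable_vsan(cluster):
    spec = vim.cluster.ConfigSpecEx(
        vsanConfig=vim.vsan.cluster.ConfigInfo(
            enabled=True,
            defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
                autoClaimStorage=False   # claim disks into disk groups manually
            ),
        )
    )
    # modify=True applies only the settings present in the spec
    return cluster.ReconfigureComputeResource_Task(spec, True)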

Read more: Creating a new vSAN 6.6 cluster

Deleting a vSAN datastore

I am a big vSAN fan and use it in my own Home Lab for most of my VMs (the main exception being the VMs used for backups; they are on my QNAP fileserver connected via iSCSI). My vSAN cluster configuration is quite static and the only thing that might change in the near future is increasing the capacity by adding an additional ESXi host to the cluster.

Currently I am running vSAN version 6.2, and since the environment is very stable and it is my “production” environment, I don't plan to upgrade to the latest and greatest version yet. Still, I do want to work with the newer versions and features (like the iSCSI target) to become familiar with them and stay up-to-date with my vSAN knowledge, so I have a test (virtual) vSphere 6.5 cluster with vSAN 6.5 installed, currently in a 2-node (ROBO) setup with an additional witness appliance.

With the release of vSAN 6.6 (check out the release notes here) I wanted to upgrade my vSAN 6.5 environment. Actually I decided to create a new vSAN 6.6 cluster from scratch with my existing ESXi hosts, which meant I first had to delete my existing vSAN 6.5 datastore.

Read more: Deleting a vSAN datastore