virBeaver is a personal blog about virtualization and the software-defined datacenter (SDDC), maintained by me, Tim Maier. The name virBeaver derives from the numerous beaver nicknames I picked up because of my birthplace, Biberach an der Riss in Germany.
With one of the latest updates for macOS Catalina, Apple introduced new requirements for the acceptance of SSL/TLS certificates. The changes are documented here: https://support.apple.com/en-us/HT210176. As a result, pages without a compliant certificate are no longer accessible in Google Chrome. Unfortunately, the default vCenter Server certificate is among those affected. Unlike other certificate warnings, this error cannot simply be bypassed via the Advanced options.
In the following I would like to show you how to temporarily work around this issue.
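Before deciding how to proceed, it is worth checking whether your certificate actually violates one of the new rules. According to the Apple article linked above, TLS server certificates issued after July 1, 2019 must not be valid for more than 825 days. As a quick local illustration (the hostname `vcenter.lab.local` is a made-up placeholder), the following sketch generates a self-signed certificate with a 900-day lifetime and inspects its validity period:

```shell
# Create a throwaway self-signed certificate valid for 900 days
# (placeholder CN; a real check would target your vCenter certificate)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/vc-key.pem -out /tmp/vc-cert.pem \
  -days 900 -subj "/CN=vcenter.lab.local"

# Inspect the validity period: 900 days exceeds Apple's 825-day limit,
# so a certificate like this triggers the new error on Catalina
openssl x509 -in /tmp/vc-cert.pem -noout -startdate -enddate
```

To check the live vCenter certificate instead, `openssl s_client -connect <vcenter>:443` piped into `openssl x509 -noout -dates` shows the same fields.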
As every year, some of my customers use the weeks after Christmas to update their environments. Nearly all of them run their ESXi hosts with vendor-specific custom images that provide additional drivers or agents on top of the standard VMware image. Unfortunately, updates almost always run into problems with conflicting VIBs, and this time was no exception. In my case it was a custom image from Fujitsu.
To perform the update successfully, the problematic VIB must be removed. I would like to outline the necessary steps below.
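The general pattern, run via SSH on the host, looks roughly like this. Note that the grep filter and the VIB name below are placeholders; the real name comes from the error message of the failed update:

```shell
# Put the host into maintenance mode before changing its software inventory
esxcli system maintenanceMode set --enable true

# List installed VIBs and look for the vendor-specific candidate
esxcli software vib list | grep -i fujitsu

# Remove the conflicting VIB by name (placeholder; use the name
# reported by the failed update), then reboot the host
esxcli software vib remove -n conflicting-vib-name
reboot
```

After the reboot, the update can be retried.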
Today I learned something very useful that I want to share with you. We have some smaller customers in the SMB segment, where we often find small vSphere clusters with two or three ESXi hosts. These are usually licensed with vSphere Essentials Plus to benefit from vSphere High Availability (vSphere HA). The functions contained in this bundle are quite sufficient for regular operation. One function missing from vSphere Essentials Plus, however, is the ability to move running VMs from one datastore to another using Storage vMotion. In this post I want to show you a way to still move your running VMs between datastores.
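One commonly described workaround for this licensing gap, offered here as a sketch rather than a definitive recipe, relies on performing a combined compute-and-storage migration, which uses the vMotion entitlement included in Essentials Plus. In PowerCLI this can be expressed with `Move-VM`; the VM, host and datastore names are placeholders:

```powershell
# Combined host + datastore migration of a running VM
# (all names are placeholders for illustration)
$vm         = Get-VM -Name "app01"
$targetHost = Get-VMHost -Name "esx02.lab.local"
$targetDs   = Get-Datastore -Name "datastore02"

# Changing host and datastore in one step moves the VM's storage
# as part of the regular vMotion operation
Move-VM -VM $vm -Destination $targetHost -Datastore $targetDs
```

The same combined migration is available in the vSphere Client via Migrate > Change both compute resource and storage.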
As a solution provider, my company often installs VMware vSphere environments for customers who do not administer the solution themselves. In addition to restricting user permissions, we have been working with individual login messages in the vCenter Server for some time now. We usually use the login message to remind the customer that the environment is managed by our company and that any changes must be approved in advance.
This post gives you a short overview on how to configure the login message using the vCenter Server administration interface.
After the general availability of ESXi 6.7 U3, I decided it was time to update my homelab. I used the integrated vSphere Update Manager (VUM) for my nested vSAN cluster without any issues. During my VCAP-DCV Deploy preparation earlier this year, I used an old Intel NUC system to deploy an additional vCenter Server Appliance (VCSA) in order to play around with Enhanced Linked Mode (ELM). I wanted the VCSAs in different Layer 3 networks, so I copied the design from the nested cluster and installed a nested router alongside a nested ESXi host, which hosts the second VCSA. Due to this design I cannot use the integrated VUM of the second VCSA to update this standalone ESXi host.
The virtual machines (VMs) on the standalone ESXi host are not very important, so I decided to shut them down together with the VCSA and update the nested ESXi host via the CLI. For details on how to update ESXi 6.x using the CLI, take a look at one of my previous blog posts here. I had used this method many times before, so I was certain there would be no issues with the update. But this time it was different: the update failed.
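For reference, the CLI update itself boils down to a few esxcli commands. The depot path and profile name below are placeholders; the actual profile names are listed by the first command:

```shell
# Enter maintenance mode before applying the update
esxcli system maintenanceMode set --enable true

# List the image profiles contained in the offline bundle
esxcli software sources profile list \
  -d /vmfs/volumes/datastore1/update-bundle.zip

# Apply the chosen profile (placeholder name), then reboot
esxcli software profile update \
  -d /vmfs/volumes/datastore1/update-bundle.zip \
  -p ESXi-6.7.0-update-profile-standard
reboot
```

It was during the `profile update` step that the failure described below occurred.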
In the last weeks I accompanied a customer during the preparation of their storage migration. Among other things, the usual questions on this subject came up:
On which datastores are my virtual machines located?
Are there virtual machines with snapshots?
Are CD/DVD drives still connected to certain VMs?
That’s why I played around with PowerCLI and wrote some scripts to answer these questions. I leave the setup of the vCenter connection and the parameterization of the individual commands to you, and concentrate here on the central command(s).
List datastores of certain VMs
This small command returns a list of VMs sorted by the name of the VM in a specified cluster, including their datastore and the VM folder.
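A PowerCLI one-liner matching that description could look like the following sketch; the cluster name is a placeholder:

```powershell
# List all VMs of a cluster with their datastore(s) and folder,
# sorted by VM name (cluster name is a placeholder)
Get-Cluster -Name "Cluster01" | Get-VM | Sort-Object Name |
  Select-Object Name,
    @{Name = "Datastore"; Expression = { (Get-Datastore -RelatedObject $_).Name -join ", " }},
    Folder
```

A VM can span several datastores, which is why the `Datastore` column joins all names into one string.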
The following is an example of the output of the command.
PowerCLI makes it really easy for an administrator to find out what dependencies exist prior to storage migration. With the commands shown above, you can quickly and easily get an overview of the current state. If you wish, you can also extend or rewrite the commands accordingly and, for example, completely automatically remove all attached ISO images from the VMs.
After attending VMware Empower 2019 Europe and receiving an exam voucher for a VCAP Deploy exam of my choice, I decided to sit the VMware Certified Advanced Professional 2018 – Data Center Virtualization Deployment (3V0-21.18) exam two weeks ago. Due to the expiry date of the voucher I had to take the exam before June 15, which meant there wasn’t much preparation time after returning from Empower.
Because I went down the VCAP-DCV Design track last year, I was already aware that the Deploy exam is a practical exam based on VMware Hands-on Labs (HOL) and not comparable to the usual multiple-choice exams. The following steps should give you a brief overview of my preparation and my experience with the exam.
Step 1: Read the blueprint
The blueprint, or Exam Preparation Guide as it is called now, is the single source of truth regarding the scope of the exam. After you read it, read it again and make sure you are well aware of the exam sections outlined in the guide. As described in the paragraph about the Minimally Qualified Candidate (MQC), the exam requires practical knowledge of complex virtualized environments across multiple locations and technologies. So be prepared to work with such an environment and its dependencies and constraints. Another very important detail is the Product Affiliation section: the exam is associated with VMware vSphere 6.5. Some features or options you might know from vSphere 6.7 may not be available or may behave differently in the exam.
Step 2: Familiarize yourself with the HOL GUI and operation
Because the exam is based on the HOLs, it is a very good idea to have a look at the GUI and operation of the HOLs before the exam. You do not want the exam room to be the first place you see the GUI and its limitations: the screen size is limited, some keyboard shortcuts might not work, and quite a bit more. Due to all these restrictions, VMware has published its own Exam UI Guide. You’ll also find a good video on this topic on YouTube.
Step 3: Know your stuff and the documentation
It sounds like a platitude, but it’s true. You don’t have to know everything 100%, but knowing which functions and options are hidden in which menu is essential for this exam. There are plenty of good blog posts with exam preparations and test exams that you can find with your favorite search engine. Regardless of which one you use, I recommend that you first work out your own solution before you look at the approach or solution in the guide.
During the exam you will not have access to the internet, not even to the VMware Knowledge Base. Nevertheless, you will find a very good selection of VMware documentation as PDFs on the desktop of your test system. Get to know the different documents and their contents in advance, and use them like you would normally use a search engine. Just don’t waste time reading the complete documentation during the exam.
Step 4: Keep calm and manage your time
You are given 205 minutes (+30 minutes for non-native English speakers) for 17 exam questions. That’s about 12 minutes (14 minutes for non-native speakers) per question. That sounds very generous at first, but you’ll need the time. Some questions consist of multiple parts, and some require vSphere actions that take longer than others. Don’t waste time on questions where you know on first reading that you have no starting point or solution; use the erasable noteboard you get to write down such questions and save them for later. A good tip I read was to write the numbers 1 to 17 below each other on the left side of the board, each followed by a short keyword, so that after three hours you still know what the question was about without having to browse through the exam manual. I also used this technique to check off questions I had already worked on and keep track of them during the exam.
Step 5: Patience until the evaluation of the exam
Because this is not the usual multiple-choice exam but a practice-oriented one, you will not get your result directly after the exam. It took a week until I received my badge from Acclaim. I haven’t received an e-mail or similar with my certification results yet.
It felt like the most difficult exam I’ve ever taken in my career. Regardless of the challenge, it was also by far the exam I enjoyed the most. I really liked solving problems and performing tasks in a “real” environment instead of just answering theoretical questions about a particular product. Someone on Twitter found a great description for the exam: “Like a day with a new customer. Here is our environment, and now you have four hours to solve the problems.” At the end of the VCAP-DCV Deploy adventure the really nice VCIX badge awaits – assuming a successful VCAP-DCV Design certification as well.
This week I reworked a customer’s backup. The customer uses Veeam Backup & Replication to back up their VMware vSphere infrastructure. For whatever reason (the customer couldn’t give me the exact one), a single backup job had been created for each VM.
Apart from the poor deduplication rate, these more than 100 backup jobs were anything but easy to manage and monitor. We looked at different ways to group the backups, and after a short time at the whiteboard we came to a first solution: VMware vSphere Tags. Tags have existed since vSphere 5.1 and are the successors of the Custom Attributes. Within Veeam Backup & Replication, they can be used as selection criteria for the VMs to be backed up in a backup job.
First we had to create the appropriate category (Backup) and tags within vSphere. Then we were able to use the newly created tags as selection criteria within the backup job.
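Creating the category and tags can also be scripted with PowerCLI instead of clicking through the vSphere Client. The tag and VM names below are examples, not the customer's actual naming:

```powershell
# Create a tag category for backup assignment; Cardinality Single
# means each VM can carry exactly one tag from this category
New-TagCategory -Name "Backup" -Cardinality Single -EntityType VirtualMachine

# Create tags representing the backup jobs (example names)
New-Tag -Name "Backup-Job-Gold"   -Category "Backup"
New-Tag -Name "Backup-Job-Silver" -Category "Backup"

# Assign a tag to a VM (placeholder VM name)
Get-VM -Name "app01" | New-TagAssignment -Tag (Get-Tag -Name "Backup-Job-Gold")
```

Single cardinality is a deliberate choice here: it prevents a VM from accidentally ending up in two backup jobs at once.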
The customer operates some hosted VMs for external customers. The VMs to be backed up are in different domains with different credentials. For this reason, we faced the challenge of having to use different credentials for different VMs within a single job. This is generally no problem for Veeam Backup & Replication. But when working with container objects like tags (but also clusters, folders, datastores, etc.), the functionality is a bit tricky to find. If we look at the settings for Credentials under Guest Processing, we first see only the container object for which we can assign certain credentials.
But if we click the Add… button and switch to VMs and Tags again, we have the possibility to expand our selected tag and see all VMs to which our tag is currently assigned.
If we now select all VMs for which we would like to assign individual credentials and press Add, we will be able to store the correct credentials in the next step.
Even if it is not immediately recognizable at first glance, the assignment of individual credentials works not only with the dedicated selection of individual VMs, but also with container objects such as tags. This enables a highly automated implementation of backup jobs even in complex environments with different credentials. New VMs only need to be given the appropriate tag, and the backup administrator only needs to receive the matching credentials if more than backing up the entire VM is required.
Today one of my customers asked me to configure vCenter High Availability (VCHA) in their production vSphere 6.7 U1 environment. Since it had been some time since I last configured VCHA, I first tested the process in my homelab. The following procedure shows how to configure and activate this feature.
Step 1: Set up the VCHA network
Before the VCHA cluster can be set up, two things have to be prepared: a dedicated network for VCHA and IP addresses for the three vCenter nodes (Active, Passive, Witness). Both requirements are displayed to the administrator in the vSphere Client right before the configuration of VCHA.
Since my homelab is a nested environment, the corresponding network can be set up without difficulty. In a regular customer situation where the ESXi hosts are connected via physical switches, the corresponding VLAN must be created and available on the switch ports before communication can function via the new VCHA network.
Step 2: Choose the VCHA network and deployment type
After clicking on SET UP VCENTER HA you can select the VCHA network for the existing, active vCenter Server Appliance (VCSA) and the deployment type. Because my customer is running only a single site with a single SSO domain I can leave the checkbox Automatically create clones for Passive and Witness nodes checked and let vSphere do the work for me.
Now I only have to select the Location (compute), Networks and Storage resources for the other two nodes (Passive and Witness) to continue with the second configuration step.
Step 3: Configure IP addresses for VCHA
After configuring the basic VM settings, only the IP addresses in the VCHA network for the three nodes (Active, Passive and Witness) have to be assigned. With some support from the customer, this should not be a problem either. Since the VCHA network is a private network that does not need to be routed, I do not configure a default gateway.
Step 4: Wait for completion
After a click on Finish, our work is done and the next steps are performed by vSphere on its own. First a clone for the Passive node is created and configured according to our specifications; then the same is done for the Witness node.
At the end we have two new vCenter VMs (Passive and Witness) in the environment.
In my homelab it took about 15 minutes until the VMs were rolled out and configured.
Also, a DRS rule that distributes the vCenter nodes across different hosts in my environment was automatically created and configured.
Since the introduction of vCenter High Availability with vSphere 6.5, the process of rolling out this great feature has become easier and more intuitive. In a relatively simple environment, the high level of automation makes it a matter of minutes to get VCHA up and running. And for more complex environments, manual cloning of the VCSA remains an option.
During my work with VMware products I have always needed a platform to test new products, updated versions and software dependencies. In the past I used my employer’s virtual environment to deploy, install and configure the necessary systems. This always had a major downside: my colleagues and I used the same environment for our tests, so we had conflicting systems now and then. For the past couple of months I have been experimenting with different lab setups, but nothing really satisfied my needs. So I decided to approach my homelab design the way I would approach any other customer design: by collecting requirements. (Even if it may not match the level of detail of a real customer design.)
Here is the list of the central requirements for my homelab:
R001: Small physical footprint.
R002: Low energy consumption.
R003: Low noise level.
R004: Powerful enough to handle different use cases at the same time:
VMware vSAN (including All-Flash, multiple disk groups and automatic component rebuilds)
VMware NSX (including NSX DLR and NSX Edges)
VMware Horizon (including virtual desktops)
R005: Portable solution for customer demonstrations.
R006: Isolated environment without dependency or negative impact to other infrastructure components.
R007: IPMI support to power it on and off remotely.
To have the automatic rebuild of components (see R004) with vSAN even in case of a host failure, at least four nodes are required. Regarding R001, R002 and R003, a full-sized 19-inch solution with four physical servers is not an option. Regarding the 10Gbit/s requirement for vSAN All-Flash (see R004), a solution consisting of several mini PCs is out of the question; such a solution is also not ideal for regular assembly and disassembly (see R005) due to the different components. To avoid dependencies on infrastructure components (see R006) despite the different use cases (see R004), a completely nested environment including AD, DNS and DHCP behind a virtual router seems to be the best option to me.
The network design of the physical ESXi host is as simple as possible and as complex as necessary. I configured two vSphere Standard Switches (vSS). The first one (vSwitch0) contains the VMkernel NIC enabled for management traffic and a standard port group for the VMs which need a connection to the home network. vSwitch1 handles all the nested virtualization magic.
To do so, I configured this vSS to accept promiscuous mode, MAC address changes and forged transmits. I also configured it for jumbo frames (MTU 9000) so I can use it for the VXLAN overlay networks with VMware NSX-V. All VMs that form the nested environment attach to vSwitch1.
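The vSwitch1 settings described above can be applied with a few lines of PowerCLI; the host name is a placeholder for my physical ESXi host:

```powershell
# Security settings required for nested virtualization on vSwitch1
# (host name is a placeholder)
Get-VMHost -Name "esx.lab.local" | Get-VirtualSwitch -Name "vSwitch1" |
  Get-SecurityPolicy |
  Set-SecurityPolicy -AllowPromiscuous $true -MacChanges $true -ForgedTransmits $true

# Jumbo frames for the VXLAN overlay networks
Get-VMHost -Name "esx.lab.local" | Get-VirtualSwitch -Name "vSwitch1" |
  Set-VirtualSwitch -Mtu 9000 -Confirm:$false
```

Promiscuous mode and forged transmits are what allow frames destined for the nested VMs to actually reach them through the outer vSwitch.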
Six VMs are deployed natively on the physical ESXi host. The first VM is a virtual router which acts as gateway to the outside world for the nested systems. It is configured with two network interfaces: one facing the home network, while the other connects to the nested environment. The second VM is a domain controller running AD, DNS, DHCP and ADCS. The remaining four VMs are the nested ESXi hosts which later form the vSphere cluster for all other workloads. The virtual router and the domain controller can be regarded as “physical” workloads running outside of the nested virtualization.
I went for a single Supermicro SuperServer E300-8D as a starting point. This not-so-small beauty convinced me not only with its solid base specification but also with its numerous expansion options. I replaced the two stock fans with Noctua ones and added a third for better overall cooling.
I was able to reuse most of the components from my homelab experiments, resulting in a nice system with 128GB RAM, a 64GB SATADOM device, a 1TB mSATA SSD and a 2TB NVMe SSD.
For all who are interested, here is the complete bill of materials of my current homelab configuration.
Supermicro SuperServer E300-8D
Noctua premium fan 40x40mm
Kingston ValueRAM 32GB
Supermicro SATADOM 64GB
Samsung 850 EVO mSATA 1TB
Samsung 970 EVO NVMe 2TB
Samsung PM983 NVMe 3.84TB
With this space- and power-saving configuration I have enough resources to run a nested 4-node vSAN cluster that uses NSX-V and runs VMware Horizon on top. In one of my next posts, I’ll go deeper into the configuration of nested ESXi hosts and the associated implementation of VMware vSAN.