
Hi everyone,

If you follow me or chat with me from time to time, you probably know that I am a big fan of VMware TestDrive. It doesn't get much attention, but I think it's a great and very useful resource.

What is VMware TestDrive exactly? To quote them: “With ready-made integrations with some of our most popular ecosystem partners, TestDrive allows you to explore VMware solutions alongside applications and services modeled on typical customers use cases, in a real-world environment.”

What this boils down to is that you get access to Ready to Use Experiences and some sandboxes, where you (mostly) have full admin access to play around with. Let's take an example of each:

  • One of the Ready to Use Experiences is a VMware SD-WAN by VeloCloud desktop that you can log into using Horizon. It comes with an extensive lab guide that walks you through everything and takes about 8 hours at the time of writing.
  • There are also Workspace ONE sandboxes at the time of writing. A sandbox gives you full administrator access so you can test Workspace ONE yourself: you can create policies, enroll users and devices, and much more.

At the time of writing, there are three categories: Digital Workspace, Accelerate Cloud Journey, and Intrinsic Security.

Digital Workspace:

  • Ready to Use Experiences:
    • Workspace ONE
    • Workspace ONE UEM
    • Horizon Cloud
    • Horizon
    • App Volumes
    • Dynamic Environment Manager
  • Sandbox Experiences:
    • Workspace ONE UEM
    • Workspace ONE Access
    • Workspace ONE Express

There are also Sample Integrations:

  • Dropbox
  • Office 365
  • Zoom

Accelerate Cloud Journey:

  • Ready to Use Experiences:
    • vSAN
    • PKS (now known as TKGI or Tanzu Kubernetes Grid Integrated)

Intrinsic Security:

  • Ready to Use Experiences:
    • VMware SD-WAN by VeloCloud
    • VMware AppDefense and Carbon Black

Now you might ask yourself: what demos are included?
Well, you can view the guides over at this link, no account required. You just won't get access to the desktops without access to VMware TestDrive.

TestDrive makes it very easy to demo things. Even more importantly (to me at least), it gives you extra hands-on experience with lab guides in real environments where applicable, for example in the vSAN and AppDefense demos.

Let me know if you would like a video walkthrough or more posts about VMware TestDrive; I'd be more than happy to make them. You can find my contact info here.

Of course, EVALExperience is also great, but I do think that TestDrive deserves more credit from the blogging community than it actually gets.

Thank you for reading this and see you in the next post!


In this post, we'll be talking about setting up a Site Pair and an interconnect with HCX between your on-prem datacenter and your VMware Cloud on AWS SDDC.

Video version:

Written version:

In this post, we are going to create a Site Pair in HCX and an interconnect. Like the name suggests, a site pair pairs two HCX sites together, so you can create an interconnect between them. Interconnects are used for network stretching among other things.

First, we go to our on-prem vCenter. Under the HCX plugin, we go to site pairing. We fill in the FQDN of the HCX server, along with the [email protected] account and password. It will look like this once it is done:

Once this is done, we can create an interconnect. Click on Interconnect and then Compute Profiles. Create a compute profile.
Give it a name, select the cluster(s) where the HCX appliances should be located, then select a datastore, folder, and CPU/memory reservation settings.
Select a Management Network Profile, Uplink Network Profile, vMotion Network Profile, vSphere Replication Network Profile, and a distributed switch to be used for network extensions.
Once that is done, you can view the required connections that should be allowed in your firewall. When you are done, it will look something like this:

Now we can create a service mesh. Select a Source Compute Profile, which is the compute profile you just created. Then select the remote profile, which is called ComputeProfile.
Select the Source Site Uplink Network Profile(s), which are the one(s) you created, and the Destination Site Uplink Network Profile(s), which is called externalNetwork. Select the appliance count (I leave it at 1) and optionally set a bandwidth limit.
Give the interconnect a friendly name, and once it is done and everything is green (this takes a while), it will look like this:

And that's it, we're done! Now we are ready to use the functions of HCX, which I will cover in a future blog post.

Stay safe and I hope that you learned something. Feel free to contact me with any questions.


This blog post serves more as a collection of links to other blogs about new announcements that VMware has made during VMworld.

1. The new vRealize Cloud Universal offering.

With this new hybrid subscription offering, VMware is providing customers the flexibility to consume both on-premises and SaaS vRealize Suite products and services using a single subscription license. This offering allows customers the freedom to move workloads between on-premises and SaaS offerings interchangeably without the requirement to purchase new licenses. Additionally, this new offering provides access to Cloud Federated Analytics and Cloud Federated Catalog capabilities.

You can read more about it here.

2. Announcing vRealize AI (formerly Project Magna)

vRealize AI will focus on vSAN in its first release, but VMware's vision is to bring this to the rest of the infrastructure as well, covering the full SDDC.

vRealize AI will be a part of vRealize Operations Cloud, at least in its first release, and only as a SaaS offering. The latter makes sense, as there's quite a bit of machine learning happening under the covers that needs compute to power it.

You can read more about it here (scroll down a bit).

3. Nutshell: vRealize Operations Cloud 8.2, Automation Cloud 8.2 and Log Insight Cloud 8.2


4. Announcing vRealize Network Insight 6.0

Running a large network is always a challenge, especially when compounded by numerous application and user requirements. With the latest 6.0 release, VMware vRealize Network Insight continues to deliver end-to-end network, application, and security visibility converged across virtual and physical networks. It also continues to improve the integrations with VMware NSX, VMware SD-WAN, VMware Cloud on AWS, Microsoft Azure, Amazon AWS, and Kubernetes environments. The new 6.0 release helps enable a transition from a reactive to a proactive understanding of the end-to-end network, so managing the day-to-day is easier and more accurate, leaving more time for strategic initiatives.

I highly recommend reading this post from VMware, where they go into detail on what’s new.

Are you a blogger, and have I missed your post? Feel free to contact me and I will add it here.

Thank you for reading, have a great day and a great VMworld!

Hi everyone,

In this blog post, we will be going over how to deploy HCX both on your VMware Cloud on AWS SDDC and on-prem.

So what exactly is HCX? One of its main features is stretching networks between on-prem and your VMC on AWS SDDC (it does not specifically have to be to/from a VMC SDDC). You can also migrate VMs, either as a vMotion (live) migration or as a bulk migration.

For the visual people, here is a video:

For the people who prefer to read, here’s a written version of the video.

In order to use HCX, we first need to enable it. Go to your SDDC console, click on Add Ons, and then activate HCX. This will take a while, so sit back while it activates.
Once it has activated, click on "Open HCX".

Once you are in the HCX console, click on Activate HCX for the SDDC you wish to activate it on. This will take a while as HCX is deployed for you.

Once HCX is activated, you will need to create some firewall rules. Go to your SDDC console and, under Networking & Security, go to Gateway Firewall. Under the Management Gateway, create a new rule: the source is the user group for your on-prem network and the destination is the HCX system group. Publish this rule and you should now be able to access the HCX VM. Back in the HCX console, click on Open HCX and log in with the [email protected] account (or any other admin account).

Under Administration, click on System Updates. Then click on Request Download link.

This will generate a download link for you that you can use to download the HCX Connector. You can either download it and upload the OVA, or copy the URL and paste it into vCenter.

Which brings us to our local vCenter. Deploy an OVA template in your cluster (or on a single host) and go through the process. Fill in the information like you always do; it will ask for things like a password, FQDN, IP, gateway, and the usual questions. Let it deploy; depending on your configuration this may take a little while. Don't forget to power it on after deployment and let it start up.

Once it has booted up, open your web browser and visit:
Then log in with the username admin and the password you set during OVA deployment.

Now we need to fill in our HCX License Key.

Go back to the VMC HCX Console and click on Activation Keys. Click on Create Activation Key. Then select your VMC on AWS subscription and then HCX Connector under System Type. Copy the generated key and paste that in the HCX License Key field on the HCX Connector, then click on Activate.

Fill in the location of your on-prem datacenter, then on the next screen fill in a system name. Click on Continue and now you will be given the option of entering your on-prem vCenter and optionally on-prem NSX.

For the identity source, fill in your vCenter's FQDN if you have the embedded PSC deployment. (Which you should have; if not, migrate to it, since the external PSC is deprecated with vCenter 6.7 and higher. With vCenter 7, it's not even an option anymore during deployment.)

Next, click on Next and then on Restart. This will restart the HCX Connector service, and after that you are up and running.

In the next video and blog post, we will create a Site Pair and an Interconnect.

If you have any questions, feel free to email me or tweet at me.

Have a great day and I hope to see you in the next one.

Hi readers,

In this post, we will take our previously deployed TKG workload cluster and make it accessible through a public IP using NAT.

First, we deploy an application. I’ll use yelb. See the previous post on how that works.

Here we see that our application is deployed. Let’s look at the load balancer IP.
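The load balancer IP can be read straight from the service object; a sketch, assuming the yelb UI service is named yelb-ui in the default namespace (service name and namespace are assumptions and may differ in your deployment):

```shell
# List the services; the EXTERNAL-IP column holds the load balancer IP
kubectl get svc -n default

# Or extract just the yelb-ui load balancer IP (service name is an assumption)
kubectl get svc yelb-ui -n default \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```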

As we can see, the IP is

Now we go to the VMC on AWS console and request a public IP. Once we have one, we go to the NAT section and create a new rule:

Be sure to also create a rule for it in the compute gateway firewall.

Now when we access it, it works!

Thank you for reading and I hope you learned something useful. Feel free to contact me in case of any questions.

Hi everyone,

In this post, we will be making our own deployment VM to deploy Tanzu Kubernetes Grid on VMware Cloud on AWS. There is a demo appliance, but I would like to use the latest versions of everything. If you would like to know more about Tanzu Kubernetes Grid, please check out this link. This also works on-prem, although the NSX-T UI in the on-prem manager is different.

I'd like to give a special shout-out to the TKG Demo Appliance, which can be used instead: it includes demos and comes with the required tools pre-installed. You can get it here and skip this section.

First, we need a virtual machine to deploy from. In my experience, deploying over a VPN does not work.

We start with an image. In my case, I like to use the Ubuntu Server cloud image, which you can get here.

Create an NSX-T segment in the VMware Cloud on AWS console for TKG. Also create a management group and rule so that the TKG segment can access the vCenter.

When you have it deployed, resize the VM. In my case I use 8 CPUs and 16 GB of memory, but I haven't tuned these numbers. Once it's powered on, log in as the ubuntu user and execute this to become the root user:

sudo su -

And then we get to installing.

We will need the tkg CLI, kubectl, and Docker. Visit and download the TKG CLI for Linux, the Photon v3 Kubernetes vx.xx.x OVA (get the latest version), the Photon v3 capv haproxy vx.x.x OVA, and the VMware Tanzu Kubernetes Grid Extensions Manifest x.x.

You can then SCP the .gz and .tar.gz files over to the VM, or upload them somewhere and wget/curl them. The two OVAs should be uploaded to a Content Library that the VMC on AWS SDDC can access. Once you have done that, deploy one VM from each OVA and convert each to a template.
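Copying the archives over might look like this; the filenames and host below are placeholders, so adjust them to the versions you actually downloaded:

```shell
# Copy the downloaded CLI archives to the deployment VM (names/host are placeholders)
scp tkg-linux-amd64-vX.X.X.gz \
    kubectl-linux-vX.XX.X+vmware.1.gz \
    tkg-extensions-manifests-vX.X.X.tar.gz \
    ubuntu@<deployment-vm-ip>:/home/ubuntu/
```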

To set up the deployment VM further, let’s install docker. We can do this with a one-liner:

curl | sudo sh

Once this is installed, get kubectl:

curl -LO "$(curl -s"

Then we extract the files:

tar xvf filename.tar.gz
gunzip filename.gz

And we run:

chmod +x kubectl tkg-linux-amd64-rest_of_filename
mv kubectl /bin/
mv tkg-linux-amd64-rest_of_filename /bin/tkg

Now we are ready. Let’s fire up a second SSH session. On the first SSH session, we run the init with the UI:

tkg init --ui

This will run the installer UI on port 8080. In the second SSH session, make sure you are on a fresh line that you have not typed on yet. Then press ~ (Shift+` on most keyboards) and then C to open the SSH escape prompt. Then type:

-L 8080:localhost:8080

This will open up port 8080 on your local system and forward it to port 8080 on the TKG VM.
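If you prefer not to use the SSH escape sequence, you can instead open the second session with the tunnel already in place (user and host below are placeholders):

```shell
# Forward local port 8080 to port 8080 on the TKG deployment VM
ssh -L 8080:localhost:8080 ubuntu@<deployment-vm-ip>
```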

Visit http://localhost:8080 and let’s get started.

Click on deploy under VMware vSphere.

In my case, I fill in the local IP of the vCenter, with the [email protected] user and the password. I press Connect, then select a datacenter and fill in a public SSH key. (If you don't have one, run ' ssh-keygen ' and then ' cat .ssh/ '.)
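Key generation can also be done non-interactively; a sketch using a throwaway path for illustration (normally you would keep the default path under ~/.ssh):

```shell
# Clean up any leftover demo keys so generation doesn't prompt to overwrite
rm -f /tmp/tkg_demo_key /tmp/tkg_demo_key.pub

# Generate an RSA key pair without a passphrase (illustrative path)
ssh-keygen -t rsa -b 4096 -N "" -f /tmp/tkg_demo_key

# The public half is what you paste into the TKG installer
cat /tmp/tkg_demo_key.pub
```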

Click on Next. Now we choose between a single-node development management cluster or the production one with three management nodes. I select the production one and pick the appropriate size, medium in my case. Then fill in a management cluster name and select the API server load balancer. You can also select the worker node instance type along with its load balancer, then click on Next.

Now we select a VM Folder to put it into, select the WorkloadDatastore for the datastore and the Compute-ResourcePool or a resource pool under it. Click on next.

Select the TKG network and in my case I leave the CIDRs as default. Click on next.

Select the OS image and click on next.

Click on review configuration and then on Deploy Management Cluster. Now we sit back and wait.

15 minutes later, and we are done for the management cluster!

Now we can deploy our first workload cluster. Go to the root shell where you ran the init, and type:

tkg create cluster clustername --plan=prod/dev

# Example:
tkg create cluster tkg-workload-1 --plan=prod

Once that finishes, run:

tkg get credentials clustername

# Example:
tkg get credentials tkg-workload-1
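After pulling the credentials, you can switch to the new cluster's context and check the nodes; the context name below is an assumption based on TKG's usual <cluster>-admin@<cluster> naming:

```shell
# Switch kubectl to the workload cluster (context name is an assumption)
kubectl config use-context tkg-workload-1-admin@tkg-workload-1

# All control plane and worker nodes should report Ready
kubectl get nodes
```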

And now you're ready to start deploying applications. In the next posts, I'll cover some demos and assigning a VMC on AWS public IP to one of the Kubernetes load balancers.

Thank you for reading, if you have any questions feel free to contact me.

I’m a vExpert now!

July 17, 2020 | vExpert, VMware

Hi everyone,

I applied in the second-half round of vExpert applications and I got accepted!

This is my first time being awarded, on my fourth attempt, and I'm really, really happy! Here's to hopefully many more years of vExpert.

Here is my public profile page.

If you don’t know what vExpert is, you are missing out on a lot, quoting from the site:
“The VMware vExpert program is VMware’s global evangelism and advocacy program. The program is designed to put VMware’s marketing resources towards your advocacy efforts. Promotion of your articles, exposure at our global events, co-op advertising, traffic analysis, and early access to beta programs and VMware’s roadmap. The awards are for individuals, not companies, and last for one year. Employees of both customers and partners can receive the awards. In the application, we consider various community activities from the previous year as well as the current year’s (only for 2nd half applications) activities in determining who gets awards. We look to see that not only were you active but are still active in the path you chose to apply for.”

Thank you all for reading this short announcement!

See you in the next post!

Hi everyone,

Here is how I upgraded my NSX-T deployment from 3.0.0 to 3.0.1.

If you try to do an in-place upgrade of an NSX-T host that has multiple host switches, you will get an error. You can use this trick to get around that limitation.

First, we log into the NSX-T Manager and go to the Upgrade tab under System. Here, we upload the upgrade bundle and continue with the upgrade.
We upgrade as normal until we get to the Hosts tab. We SSH into the host and first visit this URL: http://:8080/repository/metadata/manifest
This file contains the link to the zip file that we need to wget into /tmp on the ESXi host. Then we run:

esxcli software vib install -d /tmp/
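Putting the host-side steps together, it looks roughly like this; the manager address, repository path, and bundle filename are placeholders, since the exact zip name depends on your ESXi and NSX-T versions and comes from the manifest:

```shell
# On the ESXi host: download the upgrade bundle referenced by the manifest
cd /tmp
wget http://<nsx-manager>:8080/repository/<path-from-manifest>/<nsx-lcp-bundle>.zip

# Install the bundle; -d expects the full path to the zip depot
esxcli software vib install -d /tmp/<nsx-lcp-bundle>.zip
```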

Once the installation finishes, we refresh the Hosts tab, continue on to the management nodes, and from there we can continue as normal.

However, once the upgrade finishes, you will need to refresh the page with F5 (sometimes Ctrl+F5 or Ctrl+R) for the error to go away.

Thank you for reading this quick post and I hope that it was useful to you.

Hi everyone,

In this post, we will be deploying a PyKMIP server that stores its keys in a database. Unlike with the Docker container, the keys are persisted, so they are not lost on a reboot.

So what exactly is this for? Well, in my use case, I will be using this server to encrypt virtual machine files and drives.

For this tutorial, we will be using self-signed certs, and the keys will be stored in a SQLite database. This is not secure at all! However, it will allow you to evaluate and learn the KMS functions within vCenter.

What we will need:

  • Ubuntu Server 18.04 or 20.04 LTS installation ISO.
  • One virtual machine to install Ubuntu Server 18.04 or 20.04 LTS on.
  • A network connection to install some packages.

First, we create a virtual machine. This works just like it always does: create an Ubuntu VM and install Ubuntu on it; this should be straightforward.

Now comes the fun part. The commands from sudo -i through the chown lines below run as root; the remaining commands run as your regular user. Replace <$username> with your regular account's username.

sudo -i
apt-get update
apt-get upgrade
mkdir /usr/local/PyKMIP
mkdir /etc/pykmip
mkdir /var/log/pykmip
chown <$username>: -R /usr/local/PyKMIP
chown <$username>: -R /etc/pykmip
chown <$username>: -R /var/log/pykmip
apt-get install python2-dev libffi-dev libssl-dev libsqlite3-dev python2-setuptools python2-requests
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout /etc/ssl/private/selfsigned.key -out /etc/ssl/certs/selfsigned.crt

Then fill out the form for the SSL certificate. The above certificate will be valid for 10 years (3650 days).

chown <$username>: -R /etc/ssl/private
chown <$username>: /etc/ssl/certs/selfsigned.crt
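To sanity-check a certificate generated this way, you can inspect its validity window. The sketch below generates a throwaway cert non-interactively in /tmp (the subject is made up for illustration) so it doesn't touch the real files:

```shell
# Generate a throwaway self-signed cert non-interactively (illustrative subject)
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=pykmip.lab.local"

# Print the subject and the 10-year validity window
openssl x509 -in /tmp/demo.crt -noout -subject -dates
```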

cd /usr/local
git clone

nano /etc/pykmip/server.conf

Paste the following into the file: (replace x.x.x.x with your VM’s IP)


Almost done! Now we need to edit our crontab to start the service at startup.

crontab -e

Paste the following in on a new line:

@reboot ( sleep 30s; python2 /usr/local/PyKMIP/bin/ & )

This will make sure that it starts automatically on startup. Reboot your VM or type this in to start it as a background process:

python2 /usr/local/PyKMIP/bin/ &
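Once the server is running, you can check that it is listening. 5696 is the standard KMIP port, which PyKMIP uses unless you configured a different one in server.conf:

```shell
# Check for a listener on the KMIP port; prints a hint if it's not up yet
ss -tln | grep 5696 || echo "PyKMIP is not listening yet"
```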

Now we need to go to our vCenter. Click on the vCenter object and go to Configure. Then, under Key Providers, click "Add Standard Key Provider".

Give it a name under "Name" and "KMS". Type in the IP address under "Address" and the port number, which by default is 5696, under "Port". Then click on "Add Key Provider".

Once you have done that, we need to establish trust. Click on the Key Provider, then at the bottom click on the KMS server. Click on "Establish Trust" followed by "Make KMS trust vCenter". Click on "KMS certificate and private key" and then on "Next".

Now, we need to fill in the KMS certificate and private key. On the VM, run:

cat /etc/ssl/certs/selfsigned.crt

Paste the output (with the dashes!) under KMS certificate.

cat /etc/ssl/private/selfsigned.key

Paste the output (with the dashes!) under “KMS Private Key”.

Now click on "Establish Trust" and we're done! You should now be able to use your new KMS server in your lab.

If you want to tighten security somewhat, don't use the self-signed certificate; use your own certificates and lock down access to the VM, since the database with all your VM keys sits as a file on the VM's filesystem.

If you have any questions, feel free to contact me through email or Twitter.

Have a great day!