
Hi everyone,

In this post we will be making our own deployment VM to deploy Tanzu Kubernetes Grid (TKG) on VMware Cloud on AWS. There is a demo appliance, but I would like to use the latest versions of everything. If you would like to know more about Tanzu Kubernetes Grid, please check out this link. This also works for on-prem deployments, but the NSX-T UI is different when you use an on-prem NSX Manager.

I’d like to give a special shout-out to the TKG Demo Appliance, which can also be used instead: it includes demos and comes with all the tools pre-installed. You can get it here and skip this section.

First we need a virtual machine to deploy from. In my experience, deploying over a VPN does not work.

We start with an image. In my case, I like to use the Ubuntu Server cloudimage which you can get here.

Create a NSX-T segment in the VMware Cloud on AWS console for TKG. Also create a management group and rule so that the TKG segment can access the vCenter.

When you have it deployed, resize the VM. In my case I use 8 CPUs and 16 GB of memory, but I have not looked into the actual requirements. Once it’s powered on, log in as the ubuntu user and execute this to become the root user:

sudo su -

And then we get to installing.

We will need the tkg CLI, kubectl and Docker. Visit https://vmware.com/go/get-tkg and download the TKG CLI for Linux, the Photon v3 Kubernetes vx.xx.x OVA (get the latest version), the Photon v3 capv haproxy vx.x.x OVA and the VMware Tanzu Kubernetes Grid Extensions Manifest x.x.

You can then SCP the .gz and .tar.gz files over to the VM, or upload them somewhere and wget/curl them. The two OVAs should be uploaded to a Content Library that the VMC on AWS SDDC can access. Once you have done that, deploy one VM from each OVA and convert each to a template.

To set up the deployment VM further, let’s install docker. We can do this with a one-liner:

curl https://get.docker.com | sudo sh
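As a quick sanity check (assuming you are still root; a regular user would first need to be added to the docker group), you can verify that Docker works:

docker run --rm hello-world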

Once this is installed, get kubectl:

curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"

Then we extract the files:

tar xvf filename.tar.gz
gunzip filename.gz

And we run:

chmod +x kubectl
mv kubectl /bin/
mv tkg-linux-amd64-rest_of_filename /bin/tkg
chmod +x /bin/tkg

The chmod calls ensure both binaries are executable, as the downloaded files may not have the executable bit set.
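To confirm both binaries work, a quick check (the output will vary with the versions you downloaded):

kubectl version --client
tkg version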

Now we are ready. Let’s fire up a second SSH session. On the first SSH session, we run the init with the UI:

tkg init --ui

This will run the installer UI on port 8080. In the second SSH session, make sure you are at the start of a fresh line (press Enter first). Then press ~ (Shift+` on most layouts) followed by C to open the OpenSSH command line, and type:

-L 8080:localhost:8080

This will open up port 8080 on your local system and forward it to port 8080 on the TKG VM.
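If the escape sequence does not work for you (it is an OpenSSH feature and only registers right after a newline), you can instead start the second session with the forwarding already in place; a sketch, assuming the default ubuntu user and a placeholder IP:

ssh -L 8080:localhost:8080 ubuntu@&lt;tkg-vm-ip&gt;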

Visit http://localhost:8080 and let’s get started.

Click on deploy under VMware vSphere.

In my case, I fill in the local IP of the vCenter, with the cloudadmin@vmc.local user and the password. I press connect, then select a datacenter and fill in a public SSH key. (If you don’t have any, run ‘ssh-keygen’ and then ‘cat .ssh/id_rsa.pub’.)

Click on next. Now we select whether we want a single-node development management cluster, or the production one with three management nodes. I select the production one and pick the appropriate instance size, medium in my case. Then you can fill in a management cluster name and select the API server load balancer. You can also select the worker node instance type along with the load balancer, and click on next.

Now we select a VM Folder to put it into, select the WorkloadDatastore for the datastore and the Compute-ResourcePool or a resource pool under it. Click on next.

Select the TKG network and in my case I leave the CIDRs as default. Click on next.

Select the OS image and click on next.

Click on review configuration and then on Deploy Management Cluster. Now we sit back and wait.

Fifteen minutes later, and the management cluster is done!

Now we can deploy our first workload cluster. Go to the root shell where you ran the init, and type:

tkg create cluster clustername --plan=prod/dev

# Example:
tkg create cluster tkg-workload-1 --plan=prod

Once that finishes, run:

tkg get credentials clustername

# Example:
tkg get credentials tkg-workload-1
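tkg get credentials merges the cluster’s credentials into your kubeconfig and prints the context name. A quick sketch of switching to it and checking the nodes (the context name follows the clustername-admin@clustername pattern, but verify against the command’s output):

kubectl config use-context tkg-workload-1-admin@tkg-workload-1
kubectl get nodes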

And now you’re ready to start deploying applications. In the next posts, I’ll cover some demos and assigning a VMC on AWS Public IP to one of the Kubernetes load balancers.

Thank you for reading, if you have any questions feel free to contact me.

I’m a vExpert now!


Hi everyone,

I applied in the second-half vExpert application round and I got accepted!

This is my first time being awarded, on my fourth attempt, and I’m really, really happy! Here’s to hopefully many more years of vExpert.

Here is my public profile page.

If you don’t know what vExpert is, you are missing out on a lot; quoting from the site:
“The VMware vExpert program is VMware’s global evangelism and advocacy program. The program is designed to put VMware’s marketing resources towards your advocacy efforts. Promotion of your articles, exposure at our global events, co-op advertising, traffic analysis, and early access to beta programs and VMware’s roadmap. The awards are for individuals, not companies, and last for one year. Employees of both customers and partners can receive the awards. In the application, we consider various community activities from the previous year as well as the current year’s (only for 2nd half applications) activities in determining who gets awards. We look to see that not only were you active but are still active in the path you chose to apply for.”

Thank you all for reading this short announcement!

See you in the next post!

Hi everyone,

Here is how I upgraded my NSX-T deployment from 3.0.0 to 3.0.1.

If you try to in-place upgrade an NSX-T host that has multiple host switches, you will get an error. You can use this trick to get around that limitation.

First, we log into the NSX-T Manager and go to the Upgrade tab under System. Here, we upload the upgrade bundle and continue with the upgrade.
We upgrade as normal until we get to the Hosts tab. We SSH into the host and first visit this URL (with the placeholder replaced by your NSX Manager’s IP): http://&lt;nsx-manager-ip&gt;:8080/repository/metadata/manifest
This file contains the link to the zip file that we need to wget into /tmp on the ESXi host.
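A sketch of fetching the bundle on the host (the exact repository path and file name come from the manifest, so treat this path as a placeholder):

cd /tmp
wget http://&lt;nsx-manager-ip&gt;:8080/repository/&lt;path-from-manifest&gt;/&lt;filenamehere&gt;.zip

Then we run: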

esxcli software vib install -d /tmp/filenamehere.zip

Once the installation finishes, we refresh the Hosts tab, continue on to the management nodes, and from there we can continue as normal.

However, once the upgrade finishes, you will need to refresh the page (F5, or sometimes Ctrl+F5 or Ctrl+R) for the error to go away.

Thank you for reading this quick post and I hope that it was useful to you.

Hi everyone,

In this post, we will be deploying a PyKMIP server that stores its keys in a database. Unlike with the Docker container, the keys will be persisted, so they are not lost on a reboot.

So what exactly is this for? Well, in my use-case, I will be using this server to encrypt virtual machine files and drives.

For this tutorial, we will be using self-signed certs, and the keys will be stored in a SQLite database. This is not secure at all! However, it will allow you to evaluate and learn the KMS functions within vCenter.

What we will need:

  • Ubuntu Server 18.04 or 20.04 LTS installation ISO.
  • One virtual machine to install Ubuntu Server 18.04 or 20.04 LTS on.
  • A network connection to install some packages.

First we create a virtual machine and install Ubuntu Server on it; this should be straightforward.

Now comes the fun part. The commands between sudo -i and exit below should be executed as root, the rest as your regular user. Replace &lt;$username&gt; with your regular account’s username.

sudo -i
apt-get update
apt-get upgrade
mkdir /usr/local/PyKMIP
mkdir /etc/pykmip
mkdir /var/log/pykmip
chown <$username>: -R /usr/local/PyKMIP
chown <$username>: -R /etc/pykmip
chown <$username>: -R /var/log/pykmip
apt-get install python2-dev libffi-dev libssl-dev libsqlite3-dev python2-setuptools python2-requests
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout /etc/ssl/private/selfsigned.key -out /etc/ssl/certs/selfsigned.crt

Then fill out the form for the SSL certificate. The above certificate will be valid for 10 years (3650 days).

chown <$username>: -R /etc/ssl/private
chown <$username>: /etc/ssl/certs/selfsigned.crt
exit


cd /usr/local
git clone https://github.com/OpenKMIP/PyKMIP
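One note: if run_server.py later complains about missing Python modules, PyKMIP and its dependencies may need to be installed first. A hedged sketch (the python2-dev, libffi-dev and libssl-dev packages we installed earlier are there to build those dependencies):

cd /usr/local/PyKMIP
sudo python2 setup.py install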

nano /etc/pykmip/server.conf

Paste the following into the file: (replace x.x.x.x with your VM’s IP)

[server]
database_path=/etc/pykmip/pykmip.database
hostname=x.x.x.x
port=5696
certificate_path=/etc/ssl/certs/selfsigned.crt
key_path=/etc/ssl/private/selfsigned.key
ca_path=/etc/ssl/certs/selfsigned.crt
auth_suite=TLS1.2
policy_path=/usr/local/PyKMIP/examples/
enable_tls_client_auth=False
tls_cipher_suites=
    TLS_RSA_WITH_AES_128_CBC_SHA256
    TLS_RSA_WITH_AES_256_CBC_SHA256
    TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
logging_level=DEBUG

Almost done! Now we need to edit our crontab to start the service at startup.

crontab -e

Paste the following in on a new line:

@reboot ( sleep 30s; python2 /usr/local/PyKMIP/bin/run_server.py & )

This will make sure that it starts automatically on startup. Reboot your VM or type this in to start it as a background process:

python2 /usr/local/PyKMIP/bin/run_server.py &
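To verify the server is up and listening on the KMIP port we configured:

ss -tlnp | grep 5696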

Now we need to go to our vCenter. We click on the vCenter object and go to Configure. Then under Key Providers, we click “Add Standard Key Provider”.

Give the key provider a name under “Name” and the KMS server a name under “KMS”. Type in the IP address under “Address” and the port, 5696 by default, under “Port”. Then click on “Add Key Provider”.

Once you have done that, we need to establish trust. Click on the Key Provider, then at the bottom click on the KMS server. Click on “Establish Trust” followed by “Make KMS trust vCenter”. Click on “KMS certificate and private key” and then on “Next”.

Now, we need to fill in the KMS certificate and private key. On the VM, run:

cat /etc/ssl/certs/selfsigned.crt

Paste the output (with the dashes!) under KMS certificate.

cat /etc/ssl/private/selfsigned.key

Paste the output (with the dashes!) under “KMS Private Key”.

Now click on “Establish Trust” and we’re done! Now you should be able to use your new KMS server in your lab!

If you want to somewhat tighten security, don’t use the self-signed certificate but use your own certificates, and lock down access to the VM, since the database with all your VM keys sits as a file on its filesystem.
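For example, a rough sketch of restricting access with ufw (assuming your vCenter is at 192.0.2.10; adjust to your environment, and make sure you don’t lock yourself out of SSH):

sudo ufw allow ssh
sudo ufw allow from 192.0.2.10 to any port 5696 proto tcp
sudo ufw enable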

If you have any questions, feel free to contact me through email or Twitter.

Have a great day!

Hi everyone,

Time for a new blog post about the adventure I’ve been having for the past two days.

On June 2nd, my server, a DL380 G6, suddenly took off. The fans were roaring at 70%, and the temperature sensor near the PCI riser was showing 110 C. Of course this is not supposed to happen. In this blog post, I will take you through my experience of troubleshooting it, and hopefully it can help you too.

Okay, so we know that the temperature sensor is showing a very high temperature. Let’s remove the PCI riser and see if the sensor is defective or not. I removed it, and it was showing a normal 60 C. Okay, so there is some PCI card that is getting very hot.

Step two: we put the riser back in. However, since my riser has two cards in it, I leave one out. I left the Smart Array P420 RAID controller in and took the quad-gigabit NIC out. I turn it back on, and once the sensor initializes, I check it. It was once again showing 110 C. Very odd.

Next I swap the cards around, having the NIC in the bottom slot, and I take out the RAID controller. I once again turn it back on, and now it seems to be fine. Temperature shows 64 C. Well, it seems to be the controller then.

Next step: let’s put the RAID controller back in and leave its backup capacitor disconnected. I saw in the IML that the controller was reporting a defect; maybe that causes it. I disconnect the backup capacitor, and now all seems okay. So maybe it was the backup capacitor.

I put the network card back in, start the server once again and all seems well. I boot the ESXi host and let VMs slowly start back up. However, we’re not done yet!

VMware suddenly gives a purple screen of death, as you can see in this screenshot:

I thought it was a one-time thing (in a production environment, you should not do this! Immediately investigate why the crash occurred!). I restart the server and try again. This time, it crashes once again. When I look at the logs through the debugger, I see that nothing really shows up, other than slow response times.

I shut the server back down and take out the controller. The heatsink feels quite warm. I press on it a little bit to make sure that it’s still secured in place, and check the SAS connectors to make sure they are seated properly with no dust in them. I turn the server back on; however, now it’s taking off again, showing 113 C on the sensor.

By accident, while taking out the riser card, I touched the heatsink of my RAID controller and burned my hand. So the problem is definitely not fixed yet, and probably the controller crashed because of overheating.

I removed the controller and put the SAS cables in the on-board P410i controller. Temperatures are normal and the server has been running for a bit over a day without crashing.

Ultimately, it looks like my P420 controller has died. I should still have warranty on it from the company I bought it from, so I’m going to try to RMA it. Hopefully that will be possible.

Thank you for reading, if you have any questions feel free to contact me on my website or Twitter and I hope you learned something.

Have a great day!

Hi all!

This post will be about the current state (originally written 05/20/2020, updated 06/04/2020) of my home lab. Please keep in mind that I also have two ESXi hosts that I rent from a datacenter in Germany that I partially use for my home lab (though they are nowhere near as powerful as my home server).

Here are some pictures:

The black device on the wall is my ISP’s modem. It’s set to bridge mode, meaning it does not do any NAT, DHCP, etc. That routes to my EdgeRouter (which you can see on the edge of the plank in the first picture). This is the main router. It runs DHCP, does NAT, runs a BGP daemon and I have a VLAN on there for NSX-T.

The host you see here is my HP ProLiant DL380 G6. It has two Intel Xeon X5660s (6 cores/12 threads at 2.8 GHz) and 288 GB of DDR3 ECC memory at 1333 MHz. I have six drives in it as you can see; they are connected with two SAS cables to an extra RAID card I have in the server, a Smart Array P420. I have two 2TB HDDs in it, a 320GB HDD, two 500GB SSDs and (now, with the update) two 1TB SSDs. Sadly, on June 2nd 2020 my P420 controller died (more info here), so right now I use the built-in Smart Array P410i. The colorful cables all go up through the ceiling, into my bedroom’s floor, to a network switch, as you can see down here:

Here you can see my Raspberry Pi collection, stacked on my Humax decoder. The black switch at the bottom is my 24-port non-PoE EdgeSwitch 24 Lite. It’s currently full. Stacked on top I have my older TP-Link TL-SG2216. Currently it’s not in use… yet.
Lying on that switch is a UniFi UAP-AC-PRO (more on that later). On the blue box I have a Raspberry Pi 4 Model B 4GB. I use this as a test machine sometimes. On the upper plank I have a UniFi Security Gateway for the WiFi and guest networks.
Next to that is a UniFi 8-port 60W PoE switch. Connected to it is the UAP-AC-PRO you see in the picture, and there’s one downstairs as well. Next to that is a Raspberry Pi 3 Model B, I believe, connected to an ADS-B receiver dongle with a matching antenna next to the RPi.
There used to be a second RPi to the right of it, but it’s on my project table at the moment. That one was connected to an SDR dongle, and its antenna is on the plank below, on the right side against the wall. That’s the indoor antenna I use to listen in on the airbands (which in The Netherlands is legal at the time of writing).

That’s the current state of my home lab right now. Hopefully it gives you an idea of what I run. It’s not done yet… I will probably need to upgrade in a few years, as officially my CPUs don’t support ESXi 7.

I also want to go 10 gigabit at some point, but that’s most likely all years away.

Thank you for reading and have a great day!

Hi everyone,

A few days ago, I was surprised with a Twitter DM from the VMUG Advantage Twitter account.

It turns out that, secretly, Heath Johnson from VMware had been in touch with the VMUG Advantage team about me without telling me.

I’m not sure how he managed to pull it off, but VMUG Advantage and I are partnering up! I’ve been given a sponsored 1-year VMUG Advantage subscription, which is amazing! Many, many thanks to Heath for making it happen!

Of course, in return I’ll blog about my adventures with VMUG Advantage. TestDrive is something that I have been exploring lately, along with how much easier it’ll be for me to get things set up in my lab using the evaluation licenses.

If you are not sure what you get with VMUG Advantage, or not even sure what it is, let me explain it for you.

VMUG Advantage is a subscription you can get which gives you discounts, amongst other benefits. You get $100 off of VMworld and 20-35% off of training.
Other than that, you also get access to 365-day evaluation licenses. These non-production licenses are valid for 365 days and are perfect for use in your lab. You also get the downloads, though it may take a bit for the latest version of a product to show up. This is what is available as of writing this, May 4th 2020:

  • Workstation 15 Pro
  • Fusion 11 Pro
  • Cloud Foundation 3.9.1
  • NSX-T 3.0
  • Site Recovery Manager
  • vRealize Suite 2019
  • vRealize Network Insight
  • vSAN 7
  • vSphere 7
  • vCenter 7
  • vSphere 6.x
  • vCenter 6.x
  • vSAN 6.x
  • NSX-V
  • vRealize Orchestrator
  • vRealize Operations for Horizon
  • Horizon Advanced Edition
  • vCloud Suite Standard

I’m not entirely sure of all the versions, but this is what I got.

Not only that (which in my opinion is already amazing), you also get access to VMware TestDrive. With TestDrive you get access to multiple product environments, some of them even as sandboxes. This includes:

Ready to Use Experiences:

  • Workspace ONE
  • Workspace ONE UEM
  • Horizon Cloud
  • Horizon
  • App Volumes
  • Dynamic Environment Manager
  • vSAN
  • PKS
  • VeloCloud
  • AppDefense

Sandbox Experiences:

  • Workspace ONE UEM
  • Workspace ONE Access
  • Workspace ONE Express

You also get access to the following Sample Integrations:

  • Dropbox
  • Office 365
  • Salesforce

I’m very, very thankful to Heath for organizing this, and I’m very excited to write more blog posts about it. They are coming soon, along with other posts about some lab changes.

See you in the next post!

Hi everyone,

Here’s a quick tip for the home lab people with old servers that can’t afford to get new hardware (like me).

It seems that you can override the installer terminating when an unsupported CPU is detected.

What you need to do when booting from the ESXi ISO is press SHIFT+O and type in:

allowLegacyCPU=true

This will allow you to install ESXi 7.0 or upgrade an existing ESXi 6.7 installation.
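If you don’t want to retype this at every boot of the installer, the option can reportedly also be appended to the kernelopt line in the installation media’s boot.cfg; a hedged sketch of what that line might look like (runweasel is the usual installer default, so verify against your own boot.cfg):

kernelopt=runweasel allowLegacyCPU=true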

I’ve tested this on my server with two X5660s and have had no issues so far. However, there is currently an issue where, if you run ESXi 7.0 on older hardware, you may not be able to start virtual machines in a nested VM. For example, if you have a Linux VM on a virtual ESXi host VM on the physical host, starting the Linux VM may crash your nested ESXi host. This was on an upgrade from 6.7 to 7.0; I have not tested upgrades from other 6.x versions to 7.0, so if you have, please let me know on Twitter and I will update the blog post.

Of course, this is not supported in any way. However, it’s good for those of us who can’t afford to buy new hardware with newer CPUs. It means we get to use our old hardware for a bit longer.

Thank you for reading and I hope to see you soon in new blog posts. The blog posts are coming back, with good news to come 🙂

Stay safe and have a great day!

Hi readers,

In this part of the series, we will be deploying the VMware Horizon Unified Access Gateway (UAG) appliance. It’s similar to the old Horizon Security Server, and I myself mainly use it so I can connect to my Horizon Connection Server from a public IP address (from my /24 block).

First we download and deploy the UAG OVA template. In my set-up, a normal deployment will suffice and two NICs are enough: one for the internal LAN, and one for the external network.

Continue through the steps and power the VM on. After a while, browse to the IP of the appliance on port 9443 and log in with the admin account and password you provided during deployment.

When we log in we get a screen on which we click Select under Manual.
Enable “Edge Service Settings” and click on the gear at Horizon Settings, then enable Horizon and copy the settings below. The PCoIP URL should be the public IP address of the UAG; the Blast and Tunnel External URLs should be the public FQDN of the UAG.
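As an illustration, the values look roughly like this (the addresses are hypothetical and the ports are the defaults, so adjust them to your setup):

Connection Server URL: https://connectionserver.internal.example.com:443
PCoIP External URL: 203.0.113.10:4172
Blast External URL: https://uag.example.com:8443
Tunnel External URL: https://uag.example.com:443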

Next we log into the Connection Server. Click on Servers under Settings and then click on Connection Servers. Click on your Connection Server and then edit.

We want to disable Secure Tunnel, PCoIP Secure Gateway and Blast Secure Gateway, as our UAG will be handling those.

We can also let the UAG appear under Gateways in the dashboard. To do this, we log into the UAG and click Select under Manual again (if you had logged out already). Then we click on the gear at System Configuration under Advanced Settings. Change the UAG name to something friendly; we will need it later.

Back to the Horizon 7 Console, we expand Settings and then click on Servers. Click on Gateway and click Register. In here, fill in the friendly name you gave the UAG in the previous step.

Now the UAG shows in the dashboard.

In order to access the HTML UI through the UAG, we need to either disable origin checks on the Connection Server, or configure the Connection Server’s locked.properties with the UAG addresses. You only have to do one of them, and either way you restart the “VMware Horizon View Connection Server” service afterwards. (Disabling origin checks is shown below.)
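For the locked.properties route, a hedged sketch of what I mean (the file lives in the Connection Server’s sslgateway\conf folder; create it if it doesn’t exist). Disabling origin checking entirely:

checkOrigin=false

Or, keeping the check enabled but allowing the UAG by name (use your own FQDN; uag.example.com is a placeholder):

portalHost.1=uag.example.com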

One final thing that I want to do is change the TLS and cipher settings: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA should give you good security and good results. You can change this on the UAG admin page, under Advanced Settings followed by System Configuration.

Finally, I want to configure a trusted SSL certificate for the internet-facing side. We can do this under “TLS Server Certificate Settings” under Advanced Settings in the UAG admin panel. You will have to upload the private key file and the full-chain certificate file, and choose which interface to apply it to. In my case I selected the Internet interface.

This covers this part of the Horizon 7.11 series. In the next part, we will be creating a Windows 10 Desktop image.

I hope that this was useful for you and see you in the next post.