Category: VMware Cloud on AWS

Hi,

In this post, we’ll be talking about setting up a Site Pair and an Interconnect with HCX between your on-prem datacenter and your VMware Cloud on AWS SDDC.

Video version:

Written version:

In this post, we are going to create a Site Pair in HCX and an Interconnect. As the name suggests, a Site Pair pairs two HCX sites together so that you can create an Interconnect between them. Interconnects are used for network stretching, among other things.

First, we go to our on-prem vCenter. Under the HCX plugin, we go to Site Pairing. We fill in the FQDN of the HCX server in the SDDC, along with the cloudadmin@vmc.local account and password. It will look like this once it is done:

Once this is done, we can create an interconnect. Click on Interconnect and then Compute Profiles. Create a compute profile.
Give it a name, select the cluster(s) where the HCX appliances should be located, then select a datastore, folder, and CPU/memory reservation settings.
Select a Management Network Profile, Uplink Network Profile, vMotion Network Profile, vSphere Replication Network Profile, and a distributed switch to be used for network extensions.
Once that is done, you can view the required connections that should be allowed in your firewall. When everything is finished, it will look something like this:

Now we can create a service mesh. Select a Source Compute Profile, which is the compute profile you just created. Then select a Remote Compute Profile, which on the VMC on AWS side is called ComputeProfile.
Select the Source Site Uplink Network Profile(s), which is the one(s) you created, and the Destination Site Uplink Network Profile(s), which is called externalNetwork. Select the appliance count (I leave it at 1) and optionally set a bandwidth limit.
Give the interconnect a friendly name, and once it is done and everything is green (this takes a while), it will look like this:

And that’s it, we’re done! Now we are ready to use the functions of HCX, which I will cover in an upcoming blog post.

Stay safe and I hope that you learned something. Feel free to contact me with any questions.

Hi everyone,

In this blog post, we will be going over how to deploy HCX on both your VMware Cloud on AWS SDDC and on-prem.

So what exactly is HCX? One of its main features is stretching networks between on-prem and your VMC on AWS SDDC (it does not specifically have to be to/from a VMC SDDC). You can also migrate VMs, either as a vMotion (live) migration or as a bulk migration.

For the visual people, here is a video:

For the people who prefer to read, here’s a written version of the video.

In order to use HCX, we first need to enable it. Go to your SDDC console. Click on add ons, and then activate HCX. This will take a while, so sit back while it activates.
Once it has activated, click on “Open HCX”.

Once you are in the HCX console, click on Activate HCX for the SDDC you want to enable it on. This will take a while; HCX will be deployed for you.

Once HCX is activated, you will need to create some firewall rules. Go to your SDDC console and, under Networking & Security, go to Gateway Firewall. Under the Management Gateway, create a new rule. The source is a user-defined group containing your on-prem network and the destination is the system-defined HCX group. Publish this rule and you should now be able to access the HCX VM. Back in the HCX console, click on Open HCX and log in with the cloudadmin@vmc.local account (or any other admin account).

Under Administration, click on System Updates. Then click on Request Download link.

This will generate a download link for you that you can use to download the HCX Connector. You can either download it and upload the OVA, or copy the URL and paste it into vCenter.

Which brings us to our local vCenter. Deploy an OVA template in your cluster (or on a single host) and go through the process. Fill in the information like you always do; it will ask for things like a password, FQDN, IP, gateway, and the usual questions. Let it deploy; depending on your configuration this may take a little while. Don’t forget to power it on after deployment and let it start up.
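If you prefer the command line over the vCenter UI, VMware’s ovftool can deploy the appliance as well. This is only a sketch under a few assumptions: the OVA filename, VM name, datastore, network, and inventory path below are placeholders you need to adjust, and the exact OVF properties (password, IP, gateway and so on) can be listed by probing the OVA with ovftool first.

# Probe the OVA to see which properties it expects (names differ per build)
ovftool VMware-HCX-Connector.ova

# Deploy it; supply the required properties with --prop:key=value flags based on the probe above
ovftool --acceptAllEulas --name=hcx-connector --datastore=YourDatastore --network=YourMgmtPortgroup VMware-HCX-Connector.ova 'vi://administrator%40vsphere.local@vcenter-fqdn/YourDatacenter/host/YourCluster'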

Once it has booted up, open up your web browser and visit:
https://hcxconnector-fqdn:9443/
Then log in with the username admin and the password you set during OVA deployment.
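If the page does not load, a quick reachability check from your workstation can rule out basic network issues (just a sketch; it only verifies that the appliance answers on port 9443):

curl -kI https://hcxconnector-fqdn:9443/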

Now we need to fill in our HCX License Key.

Go back to the VMC HCX Console and click on Activation Keys. Click on Create Activation Key. Then select your VMC on AWS subscription and then HCX Connector under System Type. Copy the generated key and paste that in the HCX License Key field on the HCX Connector, then click on Activate.

Fill in the location of your on-prem datacenter, then on the next screen fill in a system name. Click on Continue and now you will be given the option of entering your on-prem vCenter and optionally on-prem NSX.

For the identity source, fill in your vCenter’s FQDN if you have the embedded PSC deployment. (Which you should, and if not then migrate to it, since the external PSC is deprecated with vCenter 6.7 and higher. With vCenter 7, it’s not even an option anymore during deployment.)

Click on Next, and then click on Restart. This will restart the HCX Connector service, and after that you are up and running.

In the next video and blog post, we will be creating a Site Pair and an Interconnect.

If you have any questions, feel free to email me or tweet at me.

Have a great day and I hope to see you in the next one.

Hi readers,

In this post, we will be taking our previously deployed TKG workload cluster and making it accessible through a public IP using NAT.

First, we deploy an application. I’ll use yelb. See the previous post on how that works.

Here we see that our application is deployed. Let’s look at the load balancer IP.
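One way to look it up is with kubectl. This is a sketch assuming yelb was deployed into a namespace called yelb, as in the previous post; the EXTERNAL-IP column of the LoadBalancer service is the address we are after:

kubectl get svc -n yelb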

As we can see, the IP is 10.250.20.61

Now we go to the VMC on AWS console and request a public IP. Once we have one, we go to the NAT section and create a new rule that translates the public IP to the load balancer’s internal IP (10.250.20.61 in this example):

Be sure to also create a rule in the Compute Gateway firewall that allows traffic to the load balancer IP.
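With the NAT rule and the firewall rule published, a quick check from outside can confirm it works (a sketch; replace the placeholder with the public IP you requested, and it assumes the yelb UI listens on port 80):

curl -I http://your-public-ip/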

Now when we access it, it works!:

Thank you for reading and I hope you learned something useful. Feel free to contact me in case of any questions.

Hi everyone,

In this post we will be making our own deployment VM to deploy Tanzu Kubernetes Grid on VMware Cloud on AWS. There is a demo appliance, but I would like to use the latest versions of everything. If you would like to know more about Tanzu Kubernetes Grid, please check out this link. This also works for on-prem deployments, but the NSX-T UI is different when you use the on-prem manager.

I’d like to give a special shout-out to the TKG Demo Appliance, which can also be used; it includes demos and already has all the tools installed. You can get it here and skip this section.

First we need a virtual machine to deploy it from. From my experience, deploying through a VPN does not work.

We start with an image. In my case, I like to use the Ubuntu Server cloud image, which you can get here.

Create an NSX-T segment for TKG in the VMware Cloud on AWS console. Also create a management gateway group and rule so that the TKG segment can access the vCenter.

When you have it deployed, resize the VM. In my case I use 8 CPUs and 16 GB of memory, but this is not something I have really tuned. Once it’s powered on, log in as the ubuntu user and execute this to become the root user:

sudo su -

And then we get to installing.

We will need the tkg CLI, kubectl and Docker. Visit https://vmware.com/go/get-tkg and download the TKG CLI for Linux, the Photon v3 Kubernetes vx.xx.x OVA (get the latest version), the Photon v3 capv haproxy vx.x.x OVA, and the VMware Tanzu Kubernetes Grid Extensions Manifest x.x.

You can then SCP the .gz and .tar.gz files over to the VM, or upload them somewhere and wget/curl them. The two OVAs should be uploaded to a Content Library that the VMC on AWS SDDC can access. Once you have done that, deploy one VM from each OVA and convert each of them to a template.
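As an example of the SCP route, assuming the deployment VM is reachable as tkg-deploy (a hypothetical hostname) and using wildcards so the exact version numbers don’t matter:

scp tkg-linux-amd64-*.gz ubuntu@tkg-deploy:/home/ubuntu/
scp tkg-extensions-manifests-*.tar.gz ubuntu@tkg-deploy:/home/ubuntu/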

To set up the deployment VM further, let’s install docker. We can do this with a one-liner:

curl https://get.docker.com | sudo sh
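To confirm Docker installed correctly and the daemon is running before moving on, a quick check:

docker version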

Once this is installed, get kubectl:

curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"

Then we extract the files:

tar xvf filename.tar.gz
gunzip filename.gz

And we run the following to make the binaries executable and put them on the PATH:

chmod +x kubectl
mv kubectl /bin/
mv tkg-linux-amd64-rest_of_filename /bin/tkg
chmod +x /bin/tkg
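A quick sanity check that both binaries are on the PATH and executable:

tkg version
kubectl version --client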

Now we are ready. Let’s fire up a second SSH session. On the first SSH session, we run the init with the UI:

tkg init --ui

This will run the installer UI on port 8080. In the second SSH session, make sure you are on a new line that you have not typed anything on yet. Then press ~ (Shift+` in most cases) followed by C to open the SSH command prompt, and type:

-L 8080:localhost:8080

This will open up port 8080 on your local system and forward it to port 8080 on the TKG VM.
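Alternatively, instead of using the escape sequence, you can start the second session with the tunnel already in place (assuming the deployment VM is reachable as tkg-deploy, a hypothetical hostname):

ssh -L 8080:localhost:8080 ubuntu@tkg-deploy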

Visit http://localhost:8080 and let’s get started.

Click on deploy under VMware vSphere.

In my case, I fill in the local IP of the vCenter, with the cloudadmin@vmc.local user and the password. I press Connect, then select a datacenter and fill in a public SSH key. (If you don’t have one, run ‘ssh-keygen’ and then ‘cat .ssh/id_rsa.pub’.)

Click on Next. Now we select whether we want a single-node development management cluster or the production one with three management nodes. I select the production one in my case and choose the appropriate instance size, which for me is medium. Then you can fill in a management cluster name and select the API server load balancer. You can also select the worker node instance type along with its load balancer, then click on Next.

Now we select a VM folder to put it in, select the WorkloadDatastore as the datastore, and choose the Compute-ResourcePool or a resource pool under it. Click on Next.

Select the TKG network and in my case I leave the CIDRs as default. Click on next.

Select the OS image and click on next.

Click on review configuration and then on Deploy Management Cluster. Now we sit back and wait.

Fifteen minutes later, and the management cluster is done!
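At this point you can poke at the management cluster from the deployment VM. Check whether the installer added a management cluster context to your kubeconfig, and if so, list its nodes:

kubectl config get-contexts
kubectl get nodes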

Now we can deploy our first workload cluster. Go to the root shell where you ran the init, and type:

tkg create cluster clustername --plan=prod/dev

# Example:
tkg create cluster tkg-workload-1 --plan=prod

Once that finishes, run:

tkg get credentials clustername

# Example:
tkg get credentials tkg-workload-1
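tkg get credentials merges the workload cluster’s context into your kubeconfig, so you can switch to it and check the nodes. The context name below follows the usual clustername-admin@clustername pattern; adjust it if kubectl config get-contexts shows something different:

kubectl config use-context tkg-workload-1-admin@tkg-workload-1
kubectl get nodes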

And now you’re ready to start deploying applications. In the next posts, I’ll cover some demos and assigning a VMC on AWS public IP to one of the Kubernetes load balancers.

Thank you for reading, if you have any questions feel free to contact me.