Time for a new blog post about the adventure I’ve been having for the past two days.
On June 2nd, my server, an HP DL380 G6, suddenly took off. The fans were roaring at 70%, and the temperature sensor near the PCI riser was showing 110 C. Of course, this is not supposed to happen. In this blog post, I will take you through my experience of troubleshooting it, and hopefully it can help you too.
Okay, so we know that the temperature sensor is showing a very high temperature. Let’s remove the PCI riser and see if the sensor is defective or not. I removed it, and it was showing a normal 60 C. Okay, so there is some PCI card that is getting very hot.
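As a side note: you don't have to eyeball these readings in the iLO web UI. If IPMI-over-LAN is enabled on the iLO, the temperature sensors can be dumped remotely with `ipmitool` (the IP address and credentials below are placeholders for your own iLO):

```
ipmitool -I lanplus -H 192.0.2.10 -U Administrator -P 'password' sdr type Temperature
```

This lists every temperature sensor the BMC knows about, which makes it easy to watch a specific sensor while swapping cards around.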
Step two: we put the riser back in. However, since my riser has two cards in it, I leave one out. I left the Smart Array P420 RAID controller in and took the quad gigabit NIC out. I turn it back on, and once the sensor initializes, I check it. It was once again showing 110 C. Very odd.
Next I swap the cards around, having the NIC in the bottom slot, and I take out the RAID controller. I once again turn it back on, and now it seems to be fine. Temperature shows 64 C. Well, it seems to be the controller then.
Next step: let's put the RAID controller back in and leave its backup capacitor disconnected. I saw in the IML that the controller reports the backup capacitor as defective; maybe that causes it. I disconnect the backup capacitor, and now all seems okay. So maybe it was the backup capacitor.
I put the network card back in, start the server once again and all seems well. I boot the ESXi host and let VMs slowly start back up. However, we’re not done yet!
VMware suddenly gives a purple screen of death, as you can see in this screenshot:
I thought it was a one-time thing (in a production environment, you should not do this! Immediately investigate why the crash occurred!). I restart the server and try again. This time, it crashes once again. When I look at the logs through the debugger, nothing really shows other than slow response times.
I shut the server back down and take out the controller. The heatsink feels quite warm. I press on it a little bit to make sure that it's still secured in place, and check the SAS connectors to make sure they are seated properly with no dust in them. I turn the server back on; however, now it's taking off again, showing 113 C on the sensor.
While taking out the riser card, I accidentally touch the heatsink of my RAID controller and burn my hand. So the problem is definitely not fixed yet, and the controller probably crashed because of overheating.
I removed the controller and put the SAS cables in the on-board P410i controller. Temperatures are normal and the server has been running for a bit over a day without crashing.
Ultimately, it looks like my P420 controller has died. I should still have warranty on it from the company I bought it from, so I’m going to try to RMA it. Hopefully that will be possible.
Thank you for reading, if you have any questions feel free to contact me on my website or Twitter and I hope you learned something.
Have a great day!
This post will be about the current state (updated 06/04/2020; originally written 05/20/2020) of my home lab. Please keep in mind that I also have two ESXi hosts that I rent from a datacenter in Germany that I partially use for my home lab (though they are nowhere near as powerful as my home server).
Here are some pictures:
The black device on the wall is my ISP's modem. It's set to bridge mode, meaning it does not do any NAT, DHCP, etc. It routes to my EdgeRouter (which you can see on the edge of the plank in the first picture). This is the main router: it runs DHCP, does NAT, runs a BGP daemon, and has a VLAN configured for NSX-T.
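As an illustration (not my actual config), the EdgeRouter side of such a setup looks roughly like this in the EdgeOS CLI; interface names, addresses and AS numbers are placeholders:

```
configure
# NAT overload for the LAN out of the WAN interface
set service nat rule 5010 type masquerade
set service nat rule 5010 outbound-interface eth0
# DHCP for the LAN
set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 default-router 192.168.1.1
set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 start 192.168.1.100 stop 192.168.1.200
# BGP peering (e.g. towards an NSX-T Tier-0 gateway)
set protocols bgp 65001 neighbor 192.168.10.2 remote-as 65000
# VLAN for NSX-T transport traffic
set interfaces ethernet eth1 vif 100 address 192.168.10.1/24
commit ; save
```

The nice thing about EdgeOS here is that the whole router role (DHCP, NAT, BGP, VLANs) fits in a handful of `set` statements.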
The host you see here is my HP ProLiant DL380 G6. It has two Intel Xeon X5660s (6 cores/12 threads at 2.8 GHz) and 288 GB of DDR3 ECC memory at 1333 MHz. I have six drives in it, as you can see; they are connected with two SAS cables to an extra RAID card I have in the server, a Smart Array P420. I have two 2TB HDDs in it, a 320GB HDD, two 500GB SSDs and (now, with the update) two 1TB SSDs. Sadly, on June 2nd 2020 my P420 controller died (more info here), so right now I use the built-in Smart Array P410i. The colorful cables all go up through the ceiling, into my bedroom's floor, to a network switch, as you can see down here:
Here you can see my Raspberry Pi collection, stacked on my Humax decoder. The black switch at the bottom is my 24-port non-PoE EdgeSwitch 24 Lite. It's currently full. Stacked on top I have my older TP-Link TL-SG2216. Currently it's not in use… yet.
Lying on that switch is a UniFi UAP-AC-PRO (more on that later). On the blue box I have a Raspberry Pi 4 Model B 4GB, which I sometimes use as a test machine. On the upper plank I have a UniFi Security Gateway for the WiFi and guest network.
Next to that is a UniFi 8-port 60W PoE switch. Connected to that is the UAP-AC-PRO you see in the picture, and there's one downstairs as well. Next to that is a Raspberry Pi 3 Model B, I believe, connected to an ADS-B receiver dongle, with a matching antenna next to the RPi.
There used to be a second RPi to the right of it, but it's on my project table at the moment. It used to be connected to an SDR dongle, and its antenna is on the plank below, on the right side against the wall. That's the indoor antenna I use to listen in on the airbands (which is legal in The Netherlands at the time of writing).
That's the current state of my homelab. Hopefully it gives you an idea of what I run right now. It's not done yet… I'll probably need to upgrade in a few years, as officially my CPUs don't support ESXi 7.
I also want to go 10 gigabit at some point, but that's most likely all years away.
Thank you for reading and have a great day!
A few days ago, I was surprised with a Twitter DM from the VMUG Advantage Twitter account.
It turns out that, secretly, Heath Johnson from VMware had been in touch with the VMUG Advantage team about me without telling me.
I'm not sure how he managed to pull it off, but VMUG Advantage and I are partnering up! I've been given a sponsored 1-year VMUG Advantage subscription, which is amazing! Many, many thanks to Heath for making it happen!
Of course, in return I'll blog about my adventures with VMUG Advantage. TestDrive is something I have been exploring lately, along with how much easier it'll be to get things set up in my lab using the evaluation licenses.
If you are not sure what you get with VMUG Advantage, or not even sure what it is, let me explain it for you.
VMUG Advantage is a subscription you can get which gives you discounts, amongst other benefits. You get $100 off of VMworld and 20-35% off of training.
Other than that, you also get access to 365-day evaluation licenses. These non-production licenses are valid for 365 days and are perfect for use in your lab. You also get the downloads, though it may take a bit for the latest version of a product to show up. This is what is available at the time of writing (May 4th, 2020):
I’m not entirely sure of all the versions, but this is what I got.
Not only that (which in my opinion is already amazing), but you also get access to VMware TestDrive. With TestDrive you get access to multiple product environments, some of them even as sandboxes. This includes:
Ready to Use Experiences:
You also get access to the following Sample Integrations:
I'm very, very thankful to Heath for organizing this, and I'm very excited to make more blog posts about it. They are coming soon, along with other blog posts about some lab changes.
See you in the next post!
Here’s a quick tip for the home lab people with old servers that can’t afford to get new hardware (like me).
It turns out that you can override the installer terminating when an unsupported CPU is detected.
What you need to do, when booting from the ESXi ISO, is press Shift+O to edit the boot options, and type in:
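The option commonly used for this workaround on ESXi 7.0 is `allowLegacyCPU=true`, appended to whatever is already on the boot command line (the pre-filled options vary a bit by version), for example:

```
runweasel cdromBoot allowLegacyCPU=true
```

With that option set, the installer downgrades the unsupported-CPU error to a warning and lets you continue.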
This will allow you to install ESXi 7.0 or upgrade an existing ESXi 6.7 installation.
I've tested this on my server with two X5660s and have had no issues so far. However, there is currently an issue where, if you run ESXi 7.0 on older hardware, you may not be able to start virtual machines inside a nested ESXi VM. For example, if you have a Linux VM on an ESXi host VM on the physical host, starting the Linux VM may crash your nested ESXi host. In my case this was an upgrade from 6.7 to 7.0; I have not tested upgrades from other 6.x versions to 7.0, so please let me know on Twitter and I will update the blog post.
Of course, this is not supported in any way. However, it's good for those of us who can't afford to buy new hardware with newer CPUs. It means we get to use our old hardware for a bit longer.
Thank you for reading and I hope to see you soon in new blog posts. The blog posts are coming back, with good news to come 🙂
Stay safe and have a great day!
In this part of the series, we will be deploying the VMware Horizon Unified Access Gateway (UAG) appliance. It's similar to the old Horizon Security Server, and I mainly use it so I can connect to my Horizon Connection Server from a public IP address (from my /24 block).
First, we download and deploy the UAG OVA template. In my set-up, a standard deployment will suffice and two NICs are enough: one for the internal LAN and one for the external network.
Continue through the steps and turn the VM on. After a while, browse to the IP of the appliance on port 9443 and log in with the admin account and password you provided during installation.
When we log in, we get a configuration screen. On this screen, we click Select under Manual.
Enable "Edge Service Settings" and click on the gear at Horizon Settings, then enable Horizon and copy the settings below. The PCoIP URL should be the public IP address of the UAG; the Blast and Tunnel External URLs should be the public FQDN of the UAG.
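As an aside, the same Horizon settings can also be supplied up front if you deploy the UAG with VMware's `uagdeploy.ps1` PowerShell method instead of the OVA wizard. A minimal sketch of the INI file it consumes (all names, IPs and URLs below are placeholders for your own environment):

```
[General]
name=UAG01
deploymentOption=twonic

[Horizon]
proxyDestinationUrl=https://connection-server.lab.local:443
pcoipExternalUrl=203.0.113.10:4172
blastExternalUrl=https://uag.example.com:8443
tunnelExternalUrl=https://uag.example.com:443
```

This is handy when you need to redeploy the appliance, since UAG upgrades are typically done by replacing the VM rather than patching it in place.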
Next we log into the Connection Server. Click on Servers under Settings and then click on Connection Servers. Click on your Connection Server and then edit.
We want to disable Secure Tunnel, PCoIP Secure Gateway and Blast Secure Gateway, as our UAG will be handling these.
We can also let the UAG appear under gateways in the dashboard. To do this, we log into the UAG and click on select under manual again (if you have logged out already). Then we click on the gear at System Configuration under Advanced Settings. Change the UAG name to something friendly. We will need it later.
Back to the Horizon 7 Console, we expand Settings and then click on Servers. Click on Gateway and click Register. In here, fill in the friendly name you gave the UAG in the previous step.
Now the UAG shows in the dashboard.
In order to access the HTML UI through the UAG, we need to either disable origin checks on the Connection Server or configure the Connection Server's locked.properties with the UAG addresses. You only have to do one of the two; either way, restart the "VMware Horizon View Connection Server" service afterwards. (Disabling origin checks is shown below.)
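For reference, the locked.properties route looks roughly like this. The file lives in the Connection Server's sslgateway configuration directory, and the hostname below is a placeholder:

```
# C:\Program Files\VMware\VMware View\Server\sslgateway\conf\locked.properties
# Either disable origin checking entirely:
checkOrigin=false
# ...or instead keep the check and list the UAG address(es) as allowed origins:
# portalHost.1=uag.example.com
```

Listing the UAG addresses is the more conservative option, since it keeps origin checking enabled for everything else.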
One final thing that I want to do is change the TLS and cipher settings. The following cipher list should give you good security and good results: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA. You can change this on the UAG admin page under Advanced Settings, followed by System Configuration.
Finally, I want to configure a trusted SSL certificate for the internet-facing side. We can do this under "TLS Server Certificate Settings" under Advanced Settings in the UAG admin panel. You will have to upload the private key file and the full-chain certificate file, and choose which interface to apply it to. In my case, I selected the Internet interface.
This covers this part of the Horizon 7.11 series. In the next part, we will be creating a Windows 10 Desktop image.
I hope that this was useful for you and see you in the next post.
In this third part of the series, we will be deploying the Connection Server, the base of the Horizon package.
First, we will need a server or virtual machine running Windows Server 2012 or higher. The OS requirement is simple (source):
| Operating system | Architecture | Editions |
| --- | --- | --- |
| Windows Server 2008 R2 SP1 | 64-bit | Standard, Enterprise, Datacenter |
| Windows Server 2012 R2 | 64-bit | Standard, Datacenter |
| Windows Server 2016 | 64-bit | Standard, Datacenter |
| Windows Server 2019 | 64-bit | Standard, Datacenter |
The following hardware requirements apply (source):
| Component | Minimum | Recommended |
| --- | --- | --- |
| Processor | Pentium IV 2.0GHz processor or higher | 4 CPUs |
| Network Adapter | 100Mbps NIC | 1Gbps NICs |
| Memory (Windows Server 2008 R2 64-bit) | 4GB RAM or higher | At least 10GB RAM for deployments of 50 or more remote desktops |
| Memory (Windows Server 2012 R2 64-bit) | 4GB RAM or higher | At least 10GB RAM for deployments of 50 or more remote desktops |
Here I have installed a Windows Server 2016 VM. We mount the Connection Server ISO and start the installation .exe file. We accept the license agreements and install the Horizon 7 Standard Server. In my case, I want to use HTML Access, so I install that too.
Next, we fill in the data recovery password. Be sure to keep it somewhere safe. Then I choose to let the installer update the Windows Firewall to open some ports, followed by authorizing a Domain Admin.
After the installation, we can access the console through this link:
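For Horizon 7, the administration console is typically reachable at the Connection Server's FQDN under /admin (or /newadmin for the HTML5 console in 7.8 and later); the hostname below is a placeholder for your own server:

```
https://connection-server.lab.local/admin
```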
It will ask for a license. Fill in your license or your trial license.
Here I added a vCenter so I can use it in the next part.
If you would like more information or have any questions, feel free to contact me. There's also a nice TechZone article that goes a bit more in-depth on this process.
See you in the next part!
Welcome to part two of the Horizon 7.11 deployment series. In this short post, we will go over what we will deploy in this series, what it is and in what part it will be deployed.
We will deploy the following:
In part 4, I will be going over a TechZone article and show you how to create a Windows 10 image that can easily be cloned to fit your deployment. For now, I will not go over View Composer; I will later in the series, once I've deployed a copy of this set-up as a lab. Due to SSD space issues this may take a while, though. I may try to balance it across HDDs, but I have to be careful to avoid this insanity.
I'm currently busy making the screenshots and writing the posts; they will be coming soon. I'll see you in the next post.
This post is the first part of a VMware Horizon 7.11 Deployment series. In this first part, we will look into what Horizon is, what it is used for and why you should use it.
As VMware puts it, which is a great explanation of what it is for, VMware Horizon 7 “simplifies the management and delivery of virtual desktops and apps on-premises, in the cloud, or in a hybrid or multi-cloud configuration through a single platform to end-users. By leveraging complete workspace environment management and optimized for the software-defined data center, Horizon 7 helps IT control, manage, and protect all of the Windows resources end users want, at the speed they expect, with the efficiency business demands.”
The main use case is VDI. What is VDI? VDI stands for Virtual Desktop Infrastructure. It means that, for example, employees in your company can connect to a Windows virtual machine that has their corporate applications on it, from anywhere with an internet connection: from a laptop, tablet, desktop, smartphone, Mac, thin client and more. This also works with individual applications instead of full desktops. These two are just a small part of what VMware Horizon 7 offers, though they are among the most popular use cases.
Why would you choose VMware Horizon 7? If you already have a vSphere stack, it integrates very well with it. For example, it leverages the capabilities of VMware vCenter Server to easily clone Windows desktop VMs from a template, on an on-demand basis, for your employees. After this, further restrictions such as policies can be applied.
Here is another example of why to use Horizon 7:
As you can see, that is many more features than most other VDI products offer.
In the next part, we will be deploying the requirements for VMware Horizon 7.
I hope that this was useful and see you in the next post.
This is part two of my VMware Cloud Foundation series. In this part, we will be upgrading all components to the latest version. At the time of writing, 3.9 is the latest version, and I am currently running 3.8.
In part one, we did a bring-up of our SDDC, creating and deploying the management Workload Domain (WLD).
First, you need to connect a My VMware account so SDDC Manager can download the upgrade bundles.
Go to Repository Settings under Administration, and log in with your MyVMware credentials. Once that has been done, the SDDC Manager will look for updates and after a while, under Repository –> Bundles, it will show the available downloads.
I let it download the "VMware Cloud Foundation Update 3.9" bundle. After this was done, which took a while, I went to Workload Domains under Inventory, clicked on the details view, and then on the MGMT domain. Under Updates/Patches, an update is shown. I ran the prechecks, and they failed on a few parts:
vSAN failed the HCL check, which is understandable given that this runs nested; I ignored that. The vRLI and vRSLCM checks also failed; this is because I do not have vRSLCM deployed yet.
After applying the update, I’ve also applied the configuration bundle.
Unfortunately, it is here that the series on Cloud Foundation ends abruptly, as I do not have access to my Cloud Foundation lab server anymore.
If anyone happens to know a place where I could temporarily get access to one, or anything like that, please email me at michael-at-masterwayz-dot-nl.
For the people who contacted me asking if there is a way to work with Cloud Foundation without the need for a lab server: yes, there is. VMware Hands-on Labs allows you to try out VMware products in your browser, and there are two labs you can use to learn how Cloud Foundation works. There is HOL-2044-01-ISM – Modernizing Your Data Center with VMware Cloud Foundation (an Interactive Simulation lab, as indicated by ISM in the title), and there is a regular lab, HOL-2046-01-HCI – VMware Cloud Foundation – Getting Started.
Thank you for following the series and see you in the next series, which will be about setting up a nested ESXi Lab template.