Lab Update 01/08/2023 (with pictures) (hardware only)
This post is a current update on what’s running in my lab, with pictures. Let’s start with the front:
We see nine servers, all HP ProLiants. From top to bottom: DL360p G8, DL380 G7, DL380p G8, DL360p G8, DL380p G8, DL360p G8, DL360p G8, DL380p G8, DL360 G6.
Here’s a table, in order from top to bottom. CPU core counts are listed per CPU. The aliases I’ll use to refer to the servers in future posts are between parentheses:
Type | Use | CPU | Memory |
---|---|---|---|
HP ProLiant DL360p G8 | Spare (unused) | 2x Intel Xeon E5-2620 (6c 12t @ 2.0 GHz) | 64GB |
HP ProLiant DL380 G7 (MMG01) | Personal test hardware server (usually off) | 2x Intel Xeon X5660 (6c 12t @ 2.8 GHz) | 64GB |
HP ProLiant DL380p G8 (TN2) | Shared iSCSI storage for the compute cluster (running TrueNAS) | 1x Intel Xeon E5-2660 v2 (10c 20t @ 2.2 GHz) | 32GB |
HP ProLiant DL360p G8 (ESXi4) | vSphere Cluster Compute Node | 2x Intel Xeon E5-2660 v2 (10c 20t @ 2.2 GHz) | 304GB |
HP ProLiant DL380p G8 (ESXi1) | vSphere Cluster Compute Node | 2x Intel Xeon E5-2660 v2 (10c 20t @ 2.2 GHz) | 320GB |
HP ProLiant DL360p G8 (ESXi2) | vSphere Cluster Compute Node | 2x Intel Xeon E5-2660 v2 (10c 20t @ 2.2 GHz) | 256GB |
HP ProLiant DL360p G8 (ESXi3) | vSphere Cluster Compute Node | 2x Intel Xeon E5-2660 v2 (10c 20t @ 2.2 GHz) | 304GB |
HP ProLiant DL380p G8 (TN) | Shared iSCSI storage for the compute cluster (running TrueNAS) | 1x Intel Xeon E5-2620 v2 (6c 12t @ 2.1 GHz) | 24GB* |
HP ProLiant DL360 G6 (BACKUP01) | Backup Server | 1x Intel Xeon X5660 (6c 12t @ 2.8 GHz) | 36GB |
*: Usually 32GB, but one of the memory banks is currently broken.
This means that my total vSphere Cluster Capacity is:
CPU: 80 cores / 160 threads
Memory: 1184 GB (about 1.16 TB)
Storage: 5.33TB of NVMe SSD storage, 2TB of HDDs, and about 2TB of SATA SSD storage
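For the curious, the CPU and memory totals are just the four compute nodes added up. Here’s a quick sketch of the arithmetic in Python, with the per-node specs copied from the table above:

```python
# Per-node specs of the four vSphere compute nodes (from the table above).
# Each node has 2x Intel Xeon E5-2660 v2 (10 cores / 20 threads per CPU).
nodes = {
    "ESXi1": {"cpus": 2, "cores": 10, "threads": 20, "mem_gb": 320},
    "ESXi2": {"cpus": 2, "cores": 10, "threads": 20, "mem_gb": 256},
    "ESXi3": {"cpus": 2, "cores": 10, "threads": 20, "mem_gb": 304},
    "ESXi4": {"cpus": 2, "cores": 10, "threads": 20, "mem_gb": 304},
}

total_cores = sum(n["cpus"] * n["cores"] for n in nodes.values())
total_threads = sum(n["cpus"] * n["threads"] for n in nodes.values())
total_mem_gb = sum(n["mem_gb"] for n in nodes.values())

print(f"CPU: {total_cores} cores / {total_threads} threads")         # 80 / 160
print(f"Memory: {total_mem_gb} GB (~{total_mem_gb / 1024:.2f} TB)")  # 1184 GB (~1.16 TB)
```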
Now let’s move on to connectivity. The two storage servers and the vSphere compute nodes are each connected through two 10 gigabit SFP+ ports.
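Since the compute nodes depend on those iSCSI targets over these links, a quick reachability check comes in handy. Below is a minimal Python sketch that probes the iSCSI portals on TCP port 3260; the addresses are hypothetical placeholders, not my actual addressing:

```python
import socket
import time

# Hypothetical portal addresses for the two TrueNAS boxes (TN and TN2);
# substitute the real IPs on the storage network. iSCSI listens on TCP 3260.
PORTALS = [("10.0.10.11", 3260), ("10.0.10.12", 3260)]

for host, port in PORTALS:
    start = time.perf_counter()
    try:
        # A successful connect means the target service is up and
        # reachable over the 10G links.
        with socket.create_connection((host, port), timeout=2):
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{host}:{port} reachable ({elapsed_ms:.1f} ms)")
    except OSError as exc:
        print(f"{host}:{port} UNREACHABLE: {exc}")
```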
Here’s a picture of the rear. Don’t shit your pants over the power delivery; it’s not as bad as it looks*:
*: The two power bars in the rack each get their own 230V 16A feed (so roughly 3.7kW per feed). One extra power bar handles the low-power devices (routers, switches).
Here’s a table of the equipment, excluding the servers. The aliases I’ll use to refer to the devices in future posts are between parentheses:
Device | Use |
---|---|
VeloCloud 5X0 | Sitting there for labbing purposes, currently unconfigured |
Ubiquiti EdgeRouter 10XP (CORE2) | Router for the iLO network |
Mikrotik RB3011UiAS-RM (CORE1) | Main router for the whole rack |
TP-Link JetStream SX3016F (S1) | Main switch; the servers connect to it with two SFP+ ports each |
TP-Link TL-SG3428X (S2) | 1G ports used for the testing servers; 10G uplink to S1 |
TP-Link TL-SG2216 (S4) | Switch dedicated to the iLO adapters |
CORE1 and CORE2 have a direct link to each other, exchanging routes through BGP.
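As a trivial health check on that setup, the sketch below verifies that the BGP port (TCP 179) is open on each router. The addresses are hypothetical placeholders, and keep in mind that routers typically only accept BGP connections from configured peers, so a refusal from elsewhere on the network doesn’t necessarily mean the session is down:

```python
import socket

# Hypothetical addresses for the two routers; substitute the real
# interface IPs on the CORE1<->CORE2 link. BGP speakers listen on TCP 179.
PEERS = {"CORE1": "192.0.2.1", "CORE2": "192.0.2.2"}

for name, addr in PEERS.items():
    try:
        with socket.create_connection((addr, 179), timeout=2):
            print(f"{name} ({addr}): BGP port 179 open")
    except OSError as exc:
        print(f"{name} ({addr}): no BGP listener reachable ({exc})")
```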
The reason S4 exists and only carries iLO traffic is purely historical: I had it set up before I got S2.
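One nice thing about having the iLOs on their own network is that scripting against them is easy. Here’s a minimal Python sketch that asks an iLO for the server’s model and power state over the Redfish API (the Gen8 iLO 4 supports Redfish on reasonably recent firmware). The address and credentials are placeholders, and certificate verification is disabled because iLOs ship with self-signed certificates:

```python
import requests  # third-party: pip install requests
import urllib3

# Placeholder iLO address and credentials; substitute your own.
ILO_HOST = "10.0.20.5"
ILO_USER = "Administrator"
ILO_PASS = "changeme"

# iLOs use self-signed certificates out of the box, so skip verification
# here (acceptable on an isolated iLO VLAN, not a habit to pick up).
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

resp = requests.get(
    f"https://{ILO_HOST}/redfish/v1/Systems/1/",
    auth=(ILO_USER, ILO_PASS),
    verify=False,
    timeout=5,
)
resp.raise_for_status()
system = resp.json()

print("Model:      ", system.get("Model"))
print("Power state:", system.get("PowerState"))
```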
Feel free to send any questions my way. In an upcoming post I’ll cover the software side of things, and in another I’ll talk about some cost-saving measures I’m planning.
Thanks for reading!