In Part 1, we reviewed all the different options for virtualizing graphics with Horizon View; now it is time to get our hands dirty. First, we decided to go with vSGA instead of vDGA so that we keep the ability to vMotion workloads to other hosts if needed. The hosts we started with needed a RAM upgrade and a SAN connectivity card, because we are doubling the load capacity on each host and need to connect to our shared storage. Here are the before and after specs for each host:
- Supermicro X9SRG-F 1U Host
- Intel Xeon E5-2643 @ 3.30 GHz
- 16GB DDR3-1600 MHz RAM (Upgraded to 32GB)
- Dual Gigabit NICs (MGMT and LAN)
- IPMI Port (For Console Access)
- Intel X540-T2 (Installed for SAN Connectivity)
- USB Flash Drive (ESXi Boot Partition)
- PNY NVIDIA Quadro 4000 2GB Graphics Card
Let's power down the host, physically install the RAM and the Intel X540-T2 card, and start preparing to install VMware ESXi 5.1 U1. For future releases of View 5.3 and beyond we will move to ESXi 5.5, but for now we are staying on View 5.2.
Next we will install ESXi 5.1 U1; this is a straightforward process.
Everyone has a different way of configuring ESXi; some use Host Profiles, some don't. All I will do for now is configure a single MGMT NIC, set the DNS name, and disable IPv6. Once I add the host to my cluster, I will run a Host Profile compliance check for the remaining settings. The host will reboot, and once I have confirmed that ESXi came up properly I'm going to shut it down and move it back into our datacenter.
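For reference, if you would rather script the basics than click through the DCUI, the hostname and DNS pieces of that initial configuration can also be set from the ESXi Shell; a minimal sketch, with placeholder values (swap in your own FQDN and DNS server):
esxcli system hostname set --fqdn=esxi-view01.lab.local   # placeholder FQDN
esxcli network ip dns server add --server=192.168.1.10   # placeholder DNS server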
Now for the fun part! With our host powered up, I can join it to my cluster and start loading the Intel and NVIDIA VIBs (vSphere Installation Bundles). I love using PowerCLI when I can, so I will deploy ESXi to all of my hosts, get them joined to the cluster, and run this fancy command to enable SSH on all hosts and kick off the VIB upload process:
Get-Cluster "CLUSTER NAME" | Get-VMHost | ForEach {Start-VMHostService -HostService ($_ | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"})}
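If you only need to do this on a host or two, the same thing can be done per host from the local ESXi Shell (via the DCUI) without PowerCLI; a quick sketch:
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh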
Time to fire up PuTTY (or your preferred SSH client) and WinSCP to transfer your VIB bundles into the /tmp/ folder on each host. Install the VIB packages by running the following command in PuTTY:
esxcli software vib install -d /tmp/software-version-esxi-version-xxxxxx.zip
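If you want a safety net first, the host can be put into maintenance mode and esxcli can do a dry run of the install before committing anything (same placeholder bundle name as above; note that the path to the zip must be absolute):
esxcli system maintenanceMode set --enable true
esxcli software vib install -d /tmp/software-version-esxi-version-xxxxxx.zip --dry-run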
After the VIBs are installed I need to reboot each host, start SSH again, and PuTTY back in to confirm a few things and get the NVIDIA services started. Verify that the NVIDIA VIB was loaded successfully:
esxcli software vib list | grep NVIDIA
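You can also check that the driver's kernel module actually loaded; the module is typically listed as nvidia, though the exact name can vary by driver release:
esxcli system module list | grep -i nvidia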
Because these hosts have only one graphics card, I need to tell ESXi to release the card and allow it to be virtualized. I will run this command to find the PCI ID of the graphics card:
lspci | grep -i display
I receive the following response:
00:04:00.0 Display controller: NVIDIA Corporation Quadro 4000
Then I will set the ownership flag of the Quadro Card in ESXi:
vmkchdev -v 00:04:00.0
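To confirm the change, you can list the PCI devices again and check that the owner for that address now shows vmkernel instead of passthru (the address formatting in the output varies slightly between builds):
vmkchdev -l | grep 04:00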
Verify that the xorg service is running; if it is not, start it with the following commands:
/etc/init.d/xorg status
/etc/init.d/xorg start
Now that everything is set up, we can confirm that ESXi has grabbed the graphics card and start monitoring GPU resources. Let's see how much graphics memory we have reserved for VMs by invoking the gpuvm command:
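gpuvm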
It shows that there is 2GB of video RAM and that a VM is already reserving some of it!
Next, I will run the NVIDIA SMI tool in watch mode to constantly monitor the performance of the graphics card; this lets me see how much of its resources are still available:
watch -n 1 nvidia-smi
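If the watch applet is not available in your ESXi build, nvidia-smi has its own loop flag that refreshes on an interval and gives roughly the same view:
nvidia-smi -l 5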
Now it's time to configure a pool of VMs that can utilize the vSGA settings we configured, and see the results of our work in Part 3!