Horizon View & vSGA – Part 3 – Pool Creation and Results

In Part 2, we installed the VIB and configured ESXi to accept the graphics card. Now we need to build a pool of desktops that utilizes vSGA and compare the results against a non-vSGA VM. Here’s a detailed breakdown of the pool settings; I will walk through each step and note some things I’ve learned about pool creation with regard to vSGA along the way.

  • Pool Type: Automatic
  • User Assignment: Floating
  • vCenter Server: View Composer Linked Clones
  • Pool ID: vSGA_Test_Pool
  • Pool Display Name: vSGA Workstations
  • Pool Description: Pool of Desktops evaluating vSGA Graphics Virtualization
  • General State: Enabled
  • Connection Server Restrictions: None
  • Remote Desktop Power Settings: Take No Power Action
  • Automatically Logoff After Disconnect: Never
  • Allow Users to Reset Their Desktops: Yes
  • Allow Multiple Sessions per User: No
  • Delete or Refresh Desktop Upon Logoff: Refresh Immediately
  • Default Display Protocol: PCoIP
  • Allow User to Choose Protocol: No
  • 3D Renderer: Automatic (512MB)
  • Max Number of Monitors: 2
  • HTML Access: Enabled
  • Adobe Flash Settings: Default
  • Provisioning – Basic Settings: Both Enabled
  • Virtual Machine Naming – Use a Naming Pattern: vSGATest{n:fixed=2}
  • Max Number of Desktops: 8
  • Number of Spare Desktops: 1
  • Minimum Number of Provisioned Desktops during Maintenance: 0
  • Provision Timing: Provision All Desktops Up Front
  • Disposable File Redirection: 20480 MB
  • Replica Disks: Replica and OS will remain together
  • Parent VM: TestGold
  • Snapshot: vSGA Prod SS
  • VM Folder Location: Workstations
  • Cluster: vSGA Cluster
  • Resource Pool: vSGA Cluster
  • Datastores: SynologySSD
  • Use View Storage Accelerator: OS Disks – 7 Days
  • Reclaim Disk Space: 1 GB
  • Blackout Times: 8-17:00 MTWTHF
  • Domain: View Service Account
  • AD Container: AD Path
  • Use QuickPrep: No Options
  • Entitle Users after Wizard Finishes: Yes

From those Pool Settings, there are a few things I want to point out. You must force all sessions to use PCoIP so that the Automatic, Hardware, and Software 3D renderer options are available. After all, the secret sauce of Horizon View is PCoIP!

I set the 3D renderer for vSGA pools to “Automatic” so that if I have other desktops on this cluster of hosts, they aren’t fighting for resources they don’t need; I can relinquish GPU resources for other desktop workloads. The Gold Images we use contain some beefy applications (AutoCAD, Revit, Navisworks, etc.), so we keep our disposable disks large to handle central model caching and push other cached loads outside of the persistent disk.

It seems silly, but I like when my users can reset their own machines; depending on the current workload, a help desk resolution could take minutes, and there’s no need to have my users wait on us! For this test I am going to push it to 11 (512MB is overkill for most task workers, but our CAD guys have enjoyed the higher video memory).

Lastly, the Reclaim Disk Space setting is a big help, reclaiming space 1GB at a time (View Storage Accelerator handles the read caching). I have set the blackout window to normal business hours to protect the SAN from unwanted IOPS spikes. You should see vCenter notifications at 5:01… it looks like a stock ticker tape!

Now that we have built our Pool, we can entitle our group or users and let them log in and start playing with vSGA enabled virtual desktops.

Our test group of users was impressed with the fluid motion of Google Earth, AutoCAD, Revit, and Navisworks. Is it amazing? Yes, for the ability to provision multiple workloads to a single GPU; no, because it doesn’t get us to that 100% physical experience just yet. Is it a step in the right direction for fully virtualized GPUs? Absolutely! I hope this small 3-part series has been informative. I will be back soon with a 3-part series on vDGA for Horizon View 5.3, but it would be nice to have a walkthrough on how to upgrade to Horizon View 5.3 first… up next!


Horizon View & vSGA – Part 2 – Install and Configure

In Part 1, we reviewed all the different options for virtualizing graphics with Horizon View; now it is time to get our hands dirty. We decided to go with vSGA instead of vDGA for the ability to vMotion workloads to other hosts if needed. The hosts we started with needed a RAM upgrade and a SAN connectivity card, because we are doubling the load capacity on each host and need to connect to our shared storage. Here are the before and after specs for each host:

  • Supermicro X9SRG-F 1U Host
  • Intel Xeon E5-2643 @ 3.30 GHz
  • 16GB DDR3-1600MHz RAM (Upgraded to 32GB)
  • Dual Gigabit NICs (MGMT and LAN)
  • IPMI Port (For Console Access)
  • Intel X540-T2 (Installed for SAN Connectivity)
  • USB Flash Drive (ESXi Boot Partition)
  • PNY NVIDIA Quadro 4000 2GB Graphics Card

Let’s power down the host, physically install the RAM and the Intel X540-T2 card, and start preparing to install VMware ESXi 5.1 U1. For future releases of View 5.3 and beyond we will install ESXi 5.5, but for now we are staying on View 5.2.

Next we will install ESXi 5.1 U1; this is a straightforward process.

Everyone has a different way of configuring ESXi; some use Host Profiles, some don’t. All I will do for now is configure a single MGMT NIC, set the DNS name, and disable IPv6. Once I add the host to my cluster, I will run a Host Profile compliance check for the remaining settings. The host will reboot; once I have confirmed that ESXi came up properly, I’m going to shut it down and move it back into our datacenter. Now for the fun parts!

Now that our host is powered up, I can join it to my cluster and start loading the Intel and NVIDIA VIBs (vSphere Installation Bundles). I love using PowerCLI when I can, so I will deploy ESXi to all of my hosts, get them joined to the cluster, and run this fancy command to enable SSH on all hosts and start the VIB upload process:

Get-Cluster "CLUSTER NAME" | Get-VMHost | ForEach {Start-VMHostService -HostService ($_ | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"})}

Time to fire up PuTTY (or your preferred SSH client) and WinSCP to transfer your VIB bundles to the /tmp/ folder on each host. Install the VIB packages by running the following command in PuTTY:

esxcli software vib install -d /tmp/software-version-esxi-version-xxxxxx.zip

After installing the VIBs I need to reboot each host, start SSH, and PuTTY back in to confirm a few things and get the NVIDIA services started. Verify that the NVIDIA VIB was loaded successfully:

esxcli software vib list | grep NVIDIA

Because these hosts have only one graphics card, I need to tell ESXi to release the card and allow it to be virtualized, so I will run this command to find the PCI ID of the graphics card:

lspci | grep -i display

I receive the following response:

00:04:00.0 Display controller: NVIDIA Corporation Quadro 4000
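If you want to grab just the PCI ID for the next step without retyping it, a quick pipeline like this works (a small sketch; the echo stands in for lspci, using the sample output above):

```shell
# Pull the PCI ID (the first field) out of the lspci match for the display
# controller. On the host you would pipe lspci itself instead of echo.
echo "00:04:00.0 Display controller: NVIDIA Corporation Quadro 4000" \
  | awk '/Display controller/ {print $1}'
# prints 00:04:00.0
```

On the host itself that becomes `lspci | grep -i display | awk '{print $1}'`.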

Then I will set the ownership flag of the Quadro Card in ESXi:

vmkchdev -v 00:04:00.0

Verify that the xorg service is running, if not then run the following commands:

/etc/init.d/xorg status
/etc/init.d/xorg start

Now that everything is set up, we can confirm that ESXi has grabbed the graphics card and start monitoring GPU resources. Let’s see how much graphics memory we have reserved for VMs by invoking the gpuvm command:

gpuvm

It shows that there is 2GB of video RAM and a VM is already reserving some!

Next, I will run the SMI interface in watch mode to constantly monitor the performance of the graphics card; this lets me see how many resources I have available.

watch -n 1 nvidia-smi


Now it’s time to configure a pool of VMs that can utilize the vSGA settings we configured and see the results of our work in Part 3!

Horizon View & vSGA – Part 1 – Intro

Since we have stepped up storage by throwing SSDs at the problem, we are now on to the next task: getting the best graphics experience to our end users. For too long people have relied on mammoth workstations with big graphics cards to get the job done. We have purchased several big-time workstations to address our CAD and modeling teams. Doesn’t that fly in the face of desktop virtualization and consolidating all processing in the datacenter? So the next logical step in the evolution of VDI is to virtualize 3D graphics.

Now that sounds like an easy task in this day and age of Apple’s motto of “it just works,” but imagine the time and dedication it took to create the first hypervisor. CPU and RAM are one thing; rendering graphics requires very specific processing threads and takes time to perfect. VMware started with SVGA and Soft3D in View 5.0 and has made major strides in graphics utilization since. That being said, VMware announced two exciting features in Horizon View 5.2: vSGA and vDGA. I will give a quick summary of both, but for more detailed information head over to Andre Leibovici’s blog for a full breakdown of what they mean. There is also the VMware white paper covering the complete line of graphics acceleration deployment.

vSGA – Virtual Shared Graphics Acceleration


vSGA gives you the ability to provision multiple VMs (linked clones or full VMs) to single or multiple GPUs. The graphics card is presented to the VM as a software video driver, and the graphics processing is handled by an ESXi driver (VIB). Graphics resources are reserved on a first-come, first-served basis, so sizing and capacity are important to consider. You can also have various pool types on a host, and not all need graphics; this is important if you have various workstation classifications running in a cluster. vSGA is a great solution for users that require higher-than-normal graphics capability: rendering 1080p video, OpenGL, DirectX, etc. We will get into configuring pools for vSGA, but there are 3 renderer options: Automatic, Software, and Hardware.

vSGA Hardware Compatibility List
  • GRID K1
  • GRID K2
  • Quadro 4000
  • Quadro 5000
  • Quadro 6000
  • Tesla M2070Q

vDGA – Virtual Dedicated Graphics Acceleration


vDGA differs from vSGA in that the physical GPU is assigned to the VM using DirectPath I/O, so the full GPU is dedicated to a specific machine. Where vSGA allows multiple VMs to provision resources from the GPU, with vDGA you install the full NVIDIA GPU driver package in the VM, and the graphics card shows up as hardware in Device Manager. In Horizon View 5.2 vDGA is still a Tech Preview, but it is fully available in View 5.3. Below is the list of compatible NVIDIA GPUs for vDGA. There are limitations to vDGA: no live vMotion; once GPU resources are exhausted on your host, no other vDGA VMs can be powered on; and, because of the nature of the NVIDIA driver, full VMs are required rather than linked clones or View Composer based VMs.

vDGA Hardware Compatibility List
  • GRID K1
  • GRID K2
  • Quadro K2000
  • Quadro K4000
  • Quadro K5000
  • Quadro K6000
  • Quadro 1000M
  • Quadro 2000
  • Quadro 3000M
  • Quadro 4000
  • Quadro 5000
  • Quadro 6000
  • Tesla M2070Q

So now that we have a basic understanding of what vSGA and vDGA mean, we can start to weigh the pros and cons of both technologies. For this first dive into virtualized graphics we decided to start with vSGA because of the ability to run other VMs alongside it; since we are testing in a lab and not production, right? Our test equipment was a repurposed Supermicro 1U server. Full specs below:

  • Supermicro X9SRG-F 1U Host
  • Intel Xeon E5-2643 @ 3.30 GHz
  • 32GB DDR3-1600MHz RAM
  • Dual Gigabit NICs (MGMT and LAN)
  • Intel X540-T2 (SAN)
  • USB Flash Drive (ESXi Boot Partition)
  • PNY NVIDIA Quadro 4000 2GB Graphics Card


We will jump into the installation and configuration in Part 2!

SSD’s saved our View Pod


I’ve talked with several colleagues in the virtualization arena, and one thing they all say is “VDI is tough, it’s always changing, there is nothing harder than virtualizing desktops!” I have learned this lesson the hard way. Two years ago our company deployed VMware’s VDI solution, View (now Horizon View), as a proof of concept (POC) to a group of test users ranging from task workers to advanced users running CPU- and graphics-intensive applications. That test group was roughly 10 people; 6 months later we deployed VDI in waves to various departments and grew to over 50 users.

Now before I go any further I want to give you a background of the equipment we used to deploy the POC:

  • Dell PowerEdge R620 – Intel Xeon E5-2690 2.9 GHz, 128GB RAM, (6) 1Gb NICs
  • HP ProCurve 5412zl L2/L3 Switch
  • Dual Dell PowerConnect 24-Port Gigabit Managed Switches (SAN Network)
  • Dell EqualLogic PS6100 (48TB Raw) – Total IOPS: 1300

The POC had been deployed before I joined the company, and at the time the VDI experience was very good. But as we continued into production, we started seeing performance hits at random times. I started in April of 2012 working in another area of IT, but was quickly attracted to the allure of VDI and everything VMware. So in my spare time I started researching VDI performance issues; I learned about PCoIP offloading, CPU and RAM issues, sizing Gold Images properly, etc. I threw out everything I knew and started over with new Gold Images: same performance issues. This all happened over 15 months.

The problem was right in front of us…

Then it occurred to me (read: Google, forums, talking with vExperts) that storage was our issue. I started reading everything about total and peak IOPS and how they relate to VDI, started scoring our various Gold Images, and discovered that some of our images had peak IOPS of over 150! Do the math: the EqualLogic we were running had a peak of 1300 IOPS, and at this point we had over 180 users, so 180 users x 25 IOPS (average) = 4500 IOPS! Houston, we have a problem.
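That back-of-the-napkin math is easy to reproduce; a quick sketch using the numbers from the paragraph above (180 users, 25 IOPS average, 1300 IOPS array peak):

```shell
# IOPS demand vs. what the array can actually deliver.
USERS=180
AVG_IOPS=25            # average IOPS per user
ARRAY_PEAK=1300        # EqualLogic peak IOPS
DEMAND=$((USERS * AVG_IOPS))
echo "Demand: ${DEMAND} IOPS, array peak: ${ARRAY_PEAK} IOPS"
# Demand: 4500 IOPS, array peak: 1300 IOPS
```

Roughly 3.5x oversubscribed, and that is before any peak-IOPS bursts from the heavier images.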

The Solution…sorta

So what did we do? It’s simple, but not easy! We realized that as we grew our VDI environment, we improved everything except storage. We upgraded to bigger, more powerful hosts, improved our core switch architecture, expanded to larger SAN switches, and upgraded our power and environmental systems. We did every upgrade except storage. This is not a slight toward our team or myself; we just didn’t have the knowledge and experience to truly understand what we were dealing with in VDI. Getting back to the solution (that is the title of this article, right?), we started meeting with various vendors and sizing solutions, and in the meantime I got the idea to buy a Synology NAS, load it up with some SSDs, and give us a fairly inexpensive band-aid until we could properly implement a permanent storage solution.

In the left corner….Synology DS3612xs


So let’s talk about the Synology DS3612xs, because this thing is a beast! I chose this model specifically for its 12-bay capacity and its ease of transition into our test lab environment (I’m begging my boss to buy it for my home lab!). The specs are really impressive:

  • 12 Drive Bays (Expandable to 36 with Add On Chassis)
  • Intel Core i3 CPU
  • 8GB RAM
  • (4) 1Gb NICs
  • Available PCIe bay (did someone say 10Gb?)
  • vSphere 5 support with VAAI
  • SSD TRIM Support
  • Synology Awesomesauce DSM operating system

In the right corner….Intel 520 Series SSDs and 10Gb Fiber

I went with Intel 520 Series 480GB solid state disks because of their reliability, cost, and total IOPS count (42,000 read / 50,000 write). Because of peak IOPS bursts, I have heard horror stories about running SSDs over 1Gb links, so I wanted a nice big pipe to our SAN network and went with an Intel SFP+ card that supports 10Gb fiber. It fit perfectly into our SAN switches, and I was excited to get everything put together!

Did it fix the IOPS issue?

Yes it has! But that was its intention all along. We took the time, did the research, and assembled a reasonable budget and solution that could solve an immediate crisis for our end users. Is it a permanent solution? Absolutely not! But we have seen an immediate performance improvement across the board, from recomposes and pool creation to end-user UI improvements; it has been really nice to finally not just know the problem but understand it.

The next steps?

Now that we have our band-aid, we can focus on our permanent storage solution. I am really excited to start working with various vendors and stand up some POCs to see how the various solutions work with our systems and processes. Until then, I get a lot of joy watching login storms go smoothly in the performance metrics every morning. Clone VMs in seconds as opposed to 90 minutes! I will update this article when I can with some specific performance charts. But for now I am getting ready for our next set of problems after storage: virtualizing graphics. But isn’t that why we are doing this, to learn, understand, solve problems, and make things better? I know I am!

Install and Configure Teradici APEX 2800 Offload Card


What is the APEX 2800 Offload Card?

The Teradici APEX 2800 is a PCoIP offload card for your compute nodes. When PCoIP traffic is detected on a node with a properly configured APEX card installed, the PCoIP software-encoding compute cycles can be offloaded (read: dynamically moved) to the card. This keeps the displays of your VDI machines from being crunched by the processors to produce visuals.

Is there a benefit? Absolutely! We have roughly 40-75 task worker profiles on each host; that’s 80-150 displays being crunched by the processor on top of Windows, Office, and other applications. That is a big load on the processor; by moving the PCoIP processing onto the APEX, we save those cycles for the items that really need the speed of the Xeon platform. With the current firmware only 100 displays can be offloaded, but that is still a ton of compute processing saved by moving to this card!
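The display arithmetic above is worth sanity-checking against that 100-display offload limit; a quick sketch using the numbers from this section (2 monitors per session, per the pool settings in Part 3):

```shell
# Compare per-host display counts against the per-card offload limit.
MONITORS=2            # monitors per session
LIMIT=100             # displays offloadable at the current firmware
for USERS in 40 75; do
  DISPLAYS=$((USERS * MONITORS))
  echo "${USERS} sessions -> ${DISPLAYS} displays (offload limit ${LIMIT})"
done
```

At the top end of that range, some displays fall back to software encoding on the CPU, so host sizing still matters.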

Install the APEX 2800

So now that we understand what it does, let’s get one physically installed into a host and then install the VIB file and enable the offloading.

  1. If this Host is in production, move all powered on/off VM’s to another available host.
  2. Put Host in Maintenance Mode, Shutdown Host, disconnect all cables (power, etc)
  3. Install APEX 2800 card to available PCIe slot (document slot location, try and make location consistent for all hosts in cluster)
  4. Reconnect cables and power Host up
  5. Confirm Host is at DCUI ready screen
  6. Enable SSH on Host, leave in Maintenance Mode

Install APEX 2800 Drivers into ESXi

For this part, make sure you have an SSH client like PuTTY and an SFTP client like WinSCP to transfer the VIB package to your host. Go to Teradici My Support (requires login) to download the latest verified VIB package for your version of Horizon View (we will be using the package for View 5.2). Now that we have all of our stuff, let’s load up the VIB package!

  1. SSH has been enabled from the prior steps, SSH into host
  2. Fire up WinSCP and start a session to the host.
  3. In WinSCP, browse to the location where you downloaded the ZIP package (i.e. Downloads) in the left pane; in the right pane browse to /tmp/ and move the package into that folder
  4. Back in PuTTY, verify the file is there by entering "cd /tmp" then "ls"; you should now see the .zip file in that folder
  5. To install the VIB package enter this command: "esxcli software vib install -d /tmp/apex2800-version-esxi-versionxxxxxx.zip" and hit enter
  6. After installation it will spit out an install summary, if you had an existing version that was upgraded it will tell you here
  7. Close WinSCP and Putty and reboot the Host from vSphere
  8. When the Host becomes available again, enable SSH and exit Maintenance Mode

Install APEX 2800 Drivers for Windows

At this point we have physically installed the APEX 2800 card and installed the VIB package on ESXi; the last piece is to install the OS-aware agent. Go to Teradici My Support (requires login) to download the latest version of the OS agent. Let’s get started!

  1. The VM can be on the Host we just installed the VIB package on.
  2. On your VM locate the OS Agent download and run the “apex2800-version-rel-xxxxx.exe” install package.
  3. Next, next, next!
  4. Finish and Reboot
  5. Time to verify the PCoIP processing is being offloaded!

APEX 2800 Commandlets

Each of these commands can be run from an SSH session, so fire up PuTTY and let’s verify that it’s working!

View APEX 2800 Status

/opt/teradici/pcoip-ctrl -I

View VM Usage and Monitoring Status

/opt/teradici/pcoip-ctrl -V

Enable/Disable APEX 2800

/opt/teradici/pcoip-ctrl -d <device number> -e
/opt/teradici/pcoip-ctrl -d <device number> -x

Displaying VM Property Values

/opt/teradici/pcoip-ctrl -O

All APEX2800 Commandlets

VDI Performance – Part 1 – Gold Images

To build a house you have to start with a solid foundation; I hate that saying, but it’s true. It applies to VDI just the same: to have an efficient VDI environment you MUST have consistent Gold Images. “Gold Image” is the term I use for the image all of my View Pools are based on. I have a 3-step process for building my Gold Images:

  1. Create OS Template
  2. Create Gold Image
  3. Install Core Company/Pool Specific Software

Before building my pool-specific Gold Images, I start with a base operating system template in vCenter; this lets me clone Gold Images, dedicated VMs, or test boxes with all the tools, agents, and other items that must be installed for a View environment. This is the most important step in the process, because it truly is the foundation for all future Gold Images in my environment. Let’s get to work!

Create OS Template

Create New Virtual Machine

In vSphere I create a new virtual machine called “Win7Gold”, I select “Custom” configuration with the following specs:

  • Name your VM
  • Choose an Inventory Location
  • Choose Host/Cluster Location
  • Choose Resource Pool (If DRS is enabled)
  • Select a Storage Resource (Local or Shared Storage)
  • Virtual Machine Version (since I am running ESXi 5.5 I choose the latest)
  • Guest OS Version (For this example I am using Windows 7 x64)
  • CPU Sockets/Cores – 2vCPU
  • Memory – 3GB (View Best Practices say 3GB minimum)
  • Network – Choose Network and Adapter – VMXNet3
  • SCSI Controller – LSI Logic SAS
  • Create a new virtual disk (unless importing from an existing disk)
  • HDD – 32GB Thin Provision (This is a template, no need to build out the whole disk)

vCenter will now build out the base VM, time to edit some hardware!

  • Remove Floppy (You can always add to a specific Gold Image later if needed)
  • Select “Options” tab and review the Boot Options
    • Select “Force BIOS Setup”
  • Power on VM and Open VM Console
  • VM will boot into BIOS, select Advanced Tab
  • Disable all of those ports (You can always add to a specific Gold Image later if needed)


  • Save and Exit
  • Back in vCenter – Edit VM Settings and mount the OS media ISO (make sure to check “Connected” and “Connect at Power On”), then install your operating system


  • Once Windows has finished installing, we need to install some VMware packages before anything else because of conflicts between the .NET Framework and the View Agent


The next step is to optimize Windows for Horizon View. VMware has a great optimization guide, including a script to run, but I am a little more visual and discovered VMware Flings. Flings offers a great application called the VMware OS Optimization Tool that can automate a lot of the tweaking for your OS template, so let’s use it!


Once the Fling is installed, let’s run an analysis and optimize everything except: Windows Firewall (for VMware Blast), Windows Update (latest and greatest updates), and Windows Search. Notice how only 4 are left? Everyone’s tweaks will be different, but there are recommended and optional ones you can review!

Once you have optimized with the Fling, uninstall the OS Optimization Tool and let’s get Microsoft updates installed.


Once Windows gets all of its updates (I don’t install IE10), there is one last update we need to install related to the VMXNet3 driver (KB2550978). After this you can go into Services and disable Windows Update. Once that is disabled, we can delete the update leftovers from



After Windows updates are finished, there are some Windows features we don’t need; let’s remove them (optional):

  • Games
  • Windows DVD Maker
  • Windows Media Center
  • Internet Printing Client
  • Windows Fax and Scan
  • Tablet PC components
  • Windows Gadget Platform
  • XPS Services
  • XPS Viewer

The last item before Sysprep is to erase all event logs. You can do this by running this script in an elevated command prompt:

for /F "tokens=*" %1 in ('wevtutil.exe el') DO wevtutil.exe cl "%1"

Let’s get ready to Sysprep our OS Template and convert to a VM Template for our Gold Images.


Run sysprep.exe and choose Generalize, OOBE, Shutdown.


Right-click on the VM and choose Template > Convert to Template. Now that we have converted our OS template into a VM template, we can proceed to cloning it to build a Gold Image for View Pools. What’s nice about building a VM template, as opposed to starting with a Gold Image, is that I can now deploy a quick test machine or build out dedicated VMs without affecting any other images. Also, if you use vCloud Director, it requires images to be templates for provisioning purposes.

Create Gold Images

Now that we have our OS Template built we can clone it to make a Gold Image. We maintain several Gold Images based on scope of work or department. Eventually we will move away from so many Gold Images and start utilizing Horizon Mirage (v2 now supports VDI). Until then let’s build out a Gold Image for our Accounting Department.

In vCenter let’s clone our Win7Template to make AccountingGold.


Each of our View Pools has specific hardware requirements based on the software they use, so let’s make some changes to address some accounting software that we will install. I will enlarge the hard drive to 72GB and increase RAM to 4GB.


Since we ran Sysprep on the OS template VM, we need to go through the basic Windows install steps on the new Gold Image. Next, join the machine to the domain and reboot. Some people say to join the domain last, but in my experience most software packages are located on a central file server, and it is easier to authenticate to AD up front, especially if you are trying to configure a DSN, SQL, or AD-specific application.


I like to tweak the display properties at this point by going to System Properties > Advanced System Settings > Advanced > Performance > Settings > Adjust for Best Performance. You can also set the Windows Theme to Windows Classic.

Install Core Company/Pool Specific Software

From here the sky is the limit on what you want to install in your Gold Images; we typically install MS Office, Flash, Chrome, Java, etc. For every application we install, we always do a first run to ensure it installed correctly. If you install MS Office, re-enable the Windows Update service and update Office (Internet or WSUS, since we are in the domain now). Once complete, disable the Windows Update service and delete the updates from:


So let’s assume you have installed everything you want and are ready to provision this image for use by a View Pool; there are several things you want to tweak to make the linked-clone process go smoothly.

SDelete is a Sysinternals utility that zeroes out slack space in Windows partitions; since we run our VMs thin provisioned, I like to make sure my Gold Images are as compact as possible when the replicas are produced. You will need to download a copy of SDelete here. To start, let’s run a check disk on the system partition:

chkdsk /f

Now we can run sdelete from a command prompt. NOTE: this will take some time.

sdelete -z

Now let’s enable SSH on the host our Gold Image is running on. Shut down your Gold Image and open PuTTY or your preferred SSH client.

cd vmfs
cd volumes
cd SPECIFIC DATASTORE (Case Sensitive)
cd SPECIFIC VM (Case Sensitive)
vmkfstools -K SPECIFIC VMDK FILE (Case Sensitive)

Hole Punching should take a while but will eventually finish.
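Since everything under /vmfs/volumes is case-sensitive, it can help to build the full path once in a variable instead of cd-ing step by step; a small sketch with placeholder datastore and VM names (the vmkfstools call is the same one shown above, left commented since it only runs on the host):

```shell
# Placeholders: substitute your real datastore and VM names (case-sensitive on VMFS).
DATASTORE="SynologySSD"
VMNAME="Win7Gold"
VMDIR="/vmfs/volumes/${DATASTORE}/${VMNAME}"
echo "$VMDIR"
# On the host you would then run:
#   cd "$VMDIR" && vmkfstools -K "${VMNAME}.vmdk"
```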

Once that is complete, start your VM back up and run chkdsk /f one more time. Also be sure to disable SSH in vCenter. Clear the event logs one more time.

Now let’s build our shutdown script, and we will be done with our Gold Image. We use this script to release the Windows KMS key, rearm the MS Office KMS, release the IP, and shut down. Create a file on the desktop called “script.bat”, right-click it, choose Edit, insert the following text, and save:

"C:\Program Files (x86)\Common Files\microsoft shared\OfficeSoftwareProtectionPlatform\OSPPREARM"
 slmgr /ckms
 ipconfig /release
 ipconfig /flushdns
 shutdown -s -t 00

Run the script.bat file, and your machine is now 100% ready to become a parent Gold Image for a View Pool; take a snapshot and start provisioning!


Up next in our 6 Part series is Part 2 – PCoIP Best Practices

VDI Performance Series Index

Part 1 – Gold Images

Part 2 – PCoIP Best Practices

Part 3 – Persistent vs Non-Persistent

Part 4 – Storage

Part 5 – End User Experience

Part 6 – Wrap Up and What’s Next

DFW VMUG User Conference Download

DFW Opening Slide Deck

DFW VMUG held its annual User Conference September 25 at the Irving Convention Center, and it was a blast. It was great to see so many people that are a part of the VMware community come together and geek out. I even had some of my non-VM coworkers come and learn about some new software and hardware at the Vendor Crawl. It was cool for them to see the journey VMware is on.


I also got a chance to meet up with some of the DFW VMUG members (Nigel, John, Brad, Matt, and me). The keynotes were well prepared, and the speakers for the breakout sessions had some great material to present. I couldn’t get to all of them, but I attended 4 sessions and wanted to do a session download. We do this at our company anytime someone goes to a conference; it helps keep everyone informed:

Evolving to the Software-Defined Data Center: EMC Integrations – Tommy Trudgen

Tommy Trudgen (vTexan) from EMC gave a great presentation showcasing the evolution of virtualization and how EMC is helping push the boundaries of the Software-Defined Data Center. He illustrated the highs and lows of EMC and how they capitalized on emerging software and technologies to keep moving forward as a leader in the storage market.

VSAN Technical Best Practices – Dan Gillcrist

By the time VSAN was announced at VMworld, I wasn’t able to get signed up for any sessions. I wasn’t going to waste time with Hands-On Labs at VMworld, so I was really pumped to do a deep dive on the subject here. For those not familiar, VSAN is VMware’s take on SAN independence and storage convergence. The best example I can give of convergence is Google’s server clusters: by keeping spinning drives in the nodes instead of centrally in a SAN, they couple all the disks into logical arrays. This lets you scale storage and performance on demand, as opposed to sizing a SAN before implementation and then adding shelves to handle growth, expected or not.

That was the quick version; for a more detailed assessment of what VSAN is, check out Duncan’s article at Yellow Bricks. Let’s dive into some best practices for VSAN.

Automation Generation – Nick Weaver

Automation is a concept I am trying to learn more about, and this session threw gas on the fire for me. PowerCLI and PowerShell are the start to automating vCenter commands, but they don’t automate processes; in comes Puppet Labs. Puppet Labs is an automation platform (paid and open-source models) that the vCloud and vSphere teams use to automate provisioning of application stacks and other management-intensive processes. VMware also makes all of the Puppet Labs scripts they use available on GitHub.

Think about the last time you implemented a software solution: what was the first thing you did, white papers right? Our company implemented an ERP platform and our install process was over 200 steps just for the servers! Now think about spitting out some code and clicking run….yup! To play with Puppet, start with something simple like the vCenter Server Appliance and configure IP, DNS, hostname, etc. In very simple terms, that's Puppet; check them out, and if you have any questions, give Nick Weaver a tweet. Also check out another project he works on: Project Zombie.
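To make the declarative idea concrete, here is a minimal Puppet sketch. The node name and addresses are hypothetical, and it only uses Puppet's built-in `host` resource; the real manifests VMware publishes on GitHub expose much richer vSphere-specific resource types.

```puppet
# Hypothetical sketch -- node name and IP are made up for illustration.
# You declare the desired end state; Puppet converges the system to match it,
# no matter how many steps that takes under the hood.
node 'vcsa01.lab.local' {

  # Built-in resource type that manages an /etc/hosts entry.
  host { 'vcsa01.lab.local':
    ensure       => present,
    ip           => '192.168.10.20',
    host_aliases => ['vcsa01'],
  }
}
```

Run the same manifest ten times and you get the same result every time, which is exactly what a 200-step install document can never promise.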

The Emerging Technologies That Will Change VDI                                                   – Mark Vaughn

If you are part of the SLED market (State, Local and Education) then you have probably heard of Presidio; they have implemented VDI solutions in a third of the nation's colleges, and that is no small feat. Mark Vaughn gave a compelling presentation on common misconceptions about VDI, areas to avoid, lessons learned, what we can expect from VDI in the near future, and how VMware is leading the industry in virtualizing desktops and offering Desktops-as-a-Service (DaaS).

Troubleshooting Storage Performance                                                                         – Brad Pinkston

Storage performance is on every admin's mind, unless you have a flash array! Brad gave a high-level overview of how storage has evolved with the releases of vSphere 5.5, VSAN, and vFlash Read Cache. I got some good nuggets of information on running esxtop and understanding averages vs. peaks; here are some valuable slides on esxtop:

  • Disk IO Latencies
  • Disk IO Queues
  • esxtop Explained
  • Storage Best Practices

On top of those valuable slides, he also recommended some tools that can be used to diagnose storage problems and help isolate the cause.
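The averages-vs.-peaks point is easy to quantify once you capture esxtop in batch mode (`esxtop -b -d 5 -n 60 > capture.csv` on the host). As a rough sketch, assuming a simplified CSV with a single device-latency column (real esxtop batch headers are far longer), a few lines of Python show how a calm average can hide a painful spike:

```python
# Sketch: compare average vs. peak device latency from an esxtop-style capture.
# The column name "DAVG_ms" and the sample data are hypothetical, simplified
# stand-ins for real esxtop batch-mode output.
import csv
import io
import statistics

sample_csv = """\
time,DAVG_ms
00:00:05,2.1
00:00:10,2.4
00:00:15,48.7
00:00:20,2.2
"""

latencies = [float(row["DAVG_ms"]) for row in csv.DictReader(io.StringIO(sample_csv))]
avg, peak = statistics.mean(latencies), max(latencies)
print(f"avg={avg:.1f} ms, peak={peak:.1f} ms")
# The mean looks tolerable, but the single large spike is what users feel:
# averaged stats smooth out exactly the events you are troubleshooting.
```

This is why Brad's advice to watch peaks, not just averages, matters: one spike like that stalls guest I/O even when the reported mean looks healthy.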

All in all, it was a great User Conference; this was my first and will not be my last. Up next, VMworld Barcelona is going on right now, and later in October VMworld on the Road is coming as well. Check those out if you can!

VDI Performance Series – Let’s Do This

What is all this talk about VDI?

VDI is a virtual desktop infrastructure designed to provision desktops as a service to the end user. The desktops are stored in the datacenter, typically on hosts backed by shared storage. End users can connect to these virtual desktops using hardware thin/zero client endpoints or a variety of software clients on PC, Mac, iOS, or Android. You can do a P2V of existing machines or build out "pools" of desktops for specific use cases or departments.

VMware’s VDI solution is called Horizon View, which is now part of the Horizon Suite. There are various components that make up Horizon View; I’m not going to go into them here because we are focusing primarily on VDI performance. I’ve included a diagram of what a typical VDI environment looks like.


Our company made the decision to deploy VMware’s VDI solution, Horizon View (5.2 as of this writing), about 18 months ago. The adoption rate has been incredible: we began with a small test group of 15 users, mainly administrative and task workers plus some advanced workers. Our test group rapidly became our production group as we went from 15 to over 100 users in less than 6 months.

Fast forward to today and we have almost 200 desktops deployed on View with no sign of slowing down; about 80% of our staff is on VDI, and the remaining users are begging for the “black box”. They just flip out when they can switch from a zero client station to their iPad to a conference room and back again!

With fast growth come scaling problems, and we have encountered our fair share of them. First it was battery backup, then it was inadequate cooling, and now it is performance. VMware has some great articles on industry best practices for deploying View, but there is much more information out there, some from VMware and some not.

My goal is to lay out the hurdles we have overcome with VDI performance, the lessons I’ve learned, and what we are planning to do in the future.

Part 1 – Gold Images

Part 2 – PCoIP Best Practices

Part 3 – Persistent vs Non-Persistent

Part 4 – Storage

Part 5 – End User Experience

Part 6 – Wrap Up and What’s Next

Look for all 6 parts over the next several weeks.