VCP6-DT Exam is released!

On December 4th, VMware announced the new VMware Certified Professional – Horizon View 6 Desktop Exam.

VCP-DT

This is very similar to the VCP5-DT exam but with additional content covering the broader Horizon suite, including Mirage and the Workspace Portal. The exam covers areas such as:

  • creating and administering virtual desktops
  • configuring and administering View, Mirage and the Workspace Portal
  • configuring host networking and storage
  • working with DRS, HA, and other cluster-related vSphere features

Having already passed the VCP5-DT, I am really excited to get to studying the VCP6-DT Exam Blueprint (which you can find here). Below you will find the various paths that are required to be eligible for the exam.

Screen Shot 2014-12-10 at 9.39.10 AM

Cannot Uninstall View Agent – Persona Management Enabled

Screen Shot 2014-04-15 at 10.48.02 AM (2)

This morning I had an issue with a full VM that required an uninstall/reinstall of the Horizon View Agent 5.2; we have been evaluating Persona Management and Folder Redirection on some View Pools. I typically follow these steps when performing an uninstall/reinstall of the View Agent:

  1. Put VM in Maintenance Mode from View Administrator Console
  2. Log in as Domain Admin (or elevated user above normal) to VM
  3. Control Panel > Add/Remove Programs > Horizon View Experience Agent > Uninstall
  4. Control Panel > Add/Remove Programs > VMware View Agent > Uninstall
  5. Shutdown VM
  6. Power On VM
  7. Install Horizon View Experience Agent
  8. Reboot
  9. Install VMware Horizon View Agent
  10. Reboot
  11. Exit Maintenance Mode from View Administrator Console
  12. Confirm Agent is “Available”

So I began going through my 12-step process to repair a bad instance of the View Agent and came across an error I hadn’t seen before:

Screen Shot 2014-04-15 at 10.46.03 AM

I thought it strange that it wouldn’t let me uninstall the Agent, since I wasn’t logged into the VM as a Persona Management-enabled user. Upon further inspection I discovered that the Persona Management Windows service was enabled and running. So I have now added a few steps to my uninstall/reinstall process, as follows (a quick PowerShell sketch of step 3 follows the list):

  1. Put VM in Maintenance Mode from View Administrator Console
  2. Log in as Domain Admin (or elevated user above normal) to VM
  3. Access services.msc – Stop VMware Persona Management Service and Disable Service
  4. Reboot VM and Log in as Domain Admin
  5. Control Panel > Add/Remove Programs > Horizon View Experience Agent > Uninstall
  6. Control Panel > Add/Remove Programs > VMware View Agent > Uninstall
  7. Shutdown VM
  8. Power On VM
  9. Install Horizon View Experience Agent
  10. Reboot
  11. Install VMware Horizon View Agent
  12. Reboot
  13. Confirm that VMware Persona Management Service is set to “Automatic” and running
  14. Exit Maintenance Mode from View Administrator Console
  15. Confirm Agent is “Available”
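
For step 3, here is a minimal PowerShell sketch of the same change, run inside the VM. It assumes the service’s display name contains "Persona" (verify with Get-Service first, since the exact name can vary by View Agent version):

# Find the Persona Management service by display name (wildcard match is an assumption)
Get-Service -DisplayName "*Persona*" | ForEach-Object {
    # Stop it and keep it from starting on the next boot
    Stop-Service -Name $_.Name -Force
    Set-Service -Name $_.Name -StartupType Disabled
}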

Persona Management is a great tool for protecting user data across floating pools for an even better end-user experience. I hope this helps someone out; it made me scratch my head for a few minutes before I figured out what needed to be addressed.

Horizon View 5.3.1 is here with VSAN support!

Now that VSAN has become GA (General Availability), it makes sense that VMware would start pushing updates of its software offerings with VSAN support, and Horizon View is no different!

What’s New in Horizon View 5.3.1

  • Requires vSphere 5.5.0 Update 1 or newer
  • Support for VSAN
  • 100 VMs per host using VSAN

Below is a list of links for all the vPieces to make your View VSAN environment come alive (some links require My VMware account to access):

vSphere and vCenter 5.5.0 Update 1

ESXi 5.5.0 Update 1

ESXi 5.5.0 Update 1 Readme

vSphere Client 5.5.0 Update 1

vCenter Server 5.5.0 Update 1 – Windows Instance

vCenter Server 5.5.0 Update 1 – Virtual Appliance

Horizon View 5.3.1 Feature Pack 1

Remote Experience Agent for 32-bit desktops

Remote Experience Agent for 64-bit desktops

HTML Access Web Portal installer

GPO bundle file

What’s Next?

I will be releasing my complete walkthrough of Horizon View 5.3 so stay tuned!

Horizon View & vSGA – Part 3 – Pool Creation and Results

In Part 2, we installed the VIB and configured ESXi to accept the graphics card. Now we need to build a pool of desktops that utilizes vSGA and compare the results against a non-vSGA VM. Here’s a detailed breakdown of the pool settings; I will walk through each step and note some things I’ve learned about pool creation with regards to vSGA along the way.

  • Pool Type: Automatic
  • User Assignment: Floating
  • vCenter Server: View Composer Linked Clones
  • Pool ID: vSGA_Test_Pool
  • Pool Display Name: vSGA Workstations
  • Pool Description: Pool of Desktops evaluating vSGA Graphics Virtualization
  • General State: Enabled
  • Connection Server Restrictions: None
  • Remote Desktop Power Settings: Take No Power Action
  • Automatically Logoff After Disconnect: Never
  • Allow Users to Reset Their Desktops: Yes
  • Allow Multiple Sessions per User: No
  • Delete or Refresh Desktop Upon Logoff: Refresh Immediately
  • Default Display Protocol: PCoIP
  • Allow User to Choose Protocol: No
  • 3D Renderer: Automatic (512MB)
  • Max Number of Monitors: 2
  • HTML Access: Enabled
  • Adobe Flash Settings: Default
  • Provisioning – Basic Settings: Both Enabled
  • Virtual Machine Naming – Use a Naming Pattern: vSGATest{n:fixed=2}
  • Max Number of Desktops: 8
  • Number of Spare Desktops: 1
  • Minimum Number of Provisioned Desktops during Maintenance: 0
  • Provision Timing: Provision All Desktops Up Front
  • Disposable File Redirection: 20480 MB
  • Replica Disks: Replica and OS will remain together
  • Parent VM: TestGold
  • Snapshot: vSGA Prod SS
  • VM Folder Location: Workstations
  • Cluster: vSGA Cluster
  • Resource Pool: vSGA Cluster
  • Datastores: SynologySSD
  • Use View Storage Accelerator: OS Disks – 7 Days
  • Reclaim Disk Space: 1 GB
  • Blackout Times: 8-17:00 MTWTHF
  • Domain: View Service Account
  • AD Container: AD Path
  • Use QuickPrep: No Options
  • Entitle Users after Wizard Finishes: Yes

From those pool settings, there are a few things I want to point out. You must force all sessions to use PCoIP; only then are the Automatic, Hardware, and Software 3D renderer options available. After all, the secret sauce of Horizon View is PCoIP!

I set vSGA pools to “Automatic” so that if I have other desktops on this cluster of hosts, they aren’t fighting for resources they don’t need; I can relinquish GPU resources for other desktop workloads. The Gold Images we use contain some beefy applications (AutoCAD, Revit, Navisworks, etc.), so we like our disposable disks large to handle central model caching and other cached loads, keeping them outside of the persistent disk.

It seems silly, but I like it when my users can reset their own machines; depending on the current workload, a help desk resolution could take minutes, and there’s no need to make my users wait on us! For this test I am going to push it to 11 with the 512MB setting (overkill for most task workers, but our CAD guys have enjoyed the extra GPU memory).

Lastly, the Reclaim Disk Space setting (alongside View Storage Accelerator) is a big help, reclaiming space 1GB at a time; I have set the blackout window to normal business hours to protect the SAN from unwanted IOPS spikes. You should see vCenter notifications at 5:01... it looks like a stock ticker tape!

Now that we have built our Pool, we can entitle our group or users and let them log in and start playing with vSGA enabled virtual desktops.

Our test group of users was impressed with the fluid motion of Google Earth, AutoCAD, Revit, and Navisworks. Is it amazing? Yes, for the ability to provision multiple workloads to a single GPU; and no, because it doesn’t get us to that 100% physical experience just yet. Is it a step in the right direction for fully virtualizing GPUs? Absolutely! I hope this small three-part series has been informative. I will be back soon with a three-part series on vDGA for Horizon View 5.3, but it would be nice to have a walkthrough on how to upgrade to Horizon View 5.3 first... up next!

Horizon View & vSGA – Part 2 – Install and Configure

In Part 1, we reviewed all the different options for virtualizing graphics with Horizon View; now it is time to get our hands dirty. First, we decided to go with vSGA instead of vDGA for the ability to vMotion workloads to other hosts if needed. The hosts that we started with needed a RAM upgrade and a SAN connectivity card, because we are doubling the load capacity on each host and need to connect to our shared storage. Here are the before and after specs for each host:

  • Supermicro X9SRG-F 1U Host
  • Intel Xeon E5-2643 @ 3.30 GHz
  • 16GB DDR3-1600MHz RAM (Upgraded to 32GB)
  • Dual Gigabit NICs (MGMT and LAN)
  • IPMI Port (For Console Access)
  • Intel X540-T2 (Installed for SAN Connectivity)
  • USB Flash Drive (ESXi Boot Partition)
  • PNY NVIDIA Quadro 4000 2GB Graphics Card

Let’s power down the host, physically install the RAM and the Intel X540-T2 card, and start preparing to install VMware ESXi 5.1 U1. For future releases of View (5.3 and beyond) we will install ESXi 5.5, but for now we are staying on View 5.2.

Next we will install ESXi 5.1 U1; this is a straightforward process.

Everyone has a different way of configuring ESXi; some use Host Profiles, some don’t. All I will do for now is configure a single MGMT NIC, set the DNS name, and disable IPv6. Once I add the host to my cluster, I will run a Host Profile compliance check for the remaining settings. The host will reboot; once I have confirmed that ESXi came up properly, I’ll shut it down and move it back into our datacenter. Now for the fun parts!

Now that our host is powered up, I can join it to my cluster and start loading the Intel and NVIDIA VIBs (vSphere Installation Bundles). I love using PowerCLI when I can, so I will deploy ESXi to all of my hosts, get them joined to the cluster, and run this fancy command to enable SSH on all hosts and start the VIB upload process:

Get-Cluster "CLUSTER NAME" | Get-VMHost | ForEach {Start-VMHostService -HostService ($_ | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"})}

Time to fire up PuTTY (or your preferred SSH client) and WinSCP, and transfer your VIB bundles into the /tmp/ folder on each host. Install the VIB packages by running the following command in PuTTY:

esxcli software vib install -d /tmp/software-version-esxi-version-xxxxxx.zip
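
If you would rather stay in PowerCLI than SSH into each host, here is a rough sketch of the same install using Get-EsxCli; it assumes a PowerCLI version that supports the -V2 switch, and the host name and depot path below are placeholders:

# Invoke "esxcli software vib install" through PowerCLI instead of an SSH session
$esxcli = Get-EsxCli -VMHost "esxi01.example.local" -V2
$esxcli.software.vib.install.Invoke(@{ depot = "/tmp/software-version-esxi-version-xxxxxx.zip" })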

After installing the VIBs I need to reboot each host, start SSH, and PuTTY back in to confirm a few things and get the NVIDIA services started. Verify that the NVIDIA VIB was loaded successfully:

esxcli software vib list | grep NVIDIA

Because these hosts have only one graphics card, I need to tell ESXi to release the card and allow it to be virtualized, so I will run this command to find the PCI ID of the graphics card:

lspci | grep -i display

I receive the following response:

00:04:00.0 Display controller: NVIDIA Corporation Quadro 4000

Then I will set the ownership flag of the Quadro Card in ESXi:

vmkchdev -v 00:04:00.0

Verify that the xorg service is running, if not then run the following commands:

/etc/init.d/xorg status
/etc/init.d/xorg start

Now that everything is set up, we can confirm that ESXi has grabbed the graphics card and start monitoring GPU resources. Let’s see how much graphics memory we have reserved for VMs by invoking the gpuvm command:

gpuvm

It shows that there is 2GB of Video RAM and a VM is reserving some already!

Next, I will run the SMI interface in watch mode to constantly monitor the performance of the graphics card; this lets me see how many resources I have available.

watch -n 1 nvidia-smi

nvidia smi

Now it’s time to configure a pool of VMs that can utilize the vSGA settings we configured, and see the results of our work in Part 3!

Horizon View & vSGA – Part 1 – Intro

Since we have stepped up storage by throwing SSDs at the problem, we are now on to the next task: getting the best graphics experience to our end users. For too long people have relied on mammoth workstations with big graphics cards to get the job done. We have purchased several big-time workstations to address our CAD and modeling teams. Doesn’t that fly in the face of desktop virtualization and consolidating all processing in the datacenter? So the next logical step in the evolution of VDI is to virtualize 3D graphics.

Now that sounds like an easy task in this day and age of Apple’s motto of “it just works,” but imagine the time and dedication it took to create the first hypervisor. CPU and RAM are one thing; dedicating specific processing threads to rendering graphics is very specialized work and takes time to perfect. VMware started with SVGA and Soft3D in View 5.0, and they have made major strides in graphics utilization since. That being said, VMware announced two exciting features in Horizon View 5.2: vSGA and vDGA. I will give a quick summary of both, but for more detailed information head over to Andre Leibovici’s blog for a full breakdown of what they mean. There is also the VMware white paper covering the complete line of graphics acceleration deployment.

vSGA – Virtual Shared Graphics Acceleration

horizon view vsga

vSGA gives you the ability to provision multiple VMs (linked clones or full VMs) to a single GPU or multiple GPUs. The graphics card is presented to the VM as a software video driver, and the graphics processing is handled by an ESXi driver (VIB). Graphics resources are reserved on a first-come, first-served basis, so sizing and capacity are important to consider. You can also have various pool types on a host, and not all of them need graphics; this is important if you have various workstation classifications running in a cluster. vSGA is a great solution for users that require higher-than-normal graphics capabilities: rendering 1080p video, OpenGL, DirectX, etc. We will get into configuring pools for vSGA, but there are three options: Automatic, Software, and Hardware.

vSGA Hardware Compatibility List
  •  GRID K1
  • GRID K2
  • Quadro 4000
  • Quadro 5000
  • Quadro 6000
  • Tesla M2070Q

vDGA – Virtual Dedicated Graphics Acceleration

horizon view vdga

vDGA differs from vSGA in that the physical GPU is assigned to the VM using DirectPath I/O, so the full GPU is dedicated to a specific machine. Where vSGA allows multiple VMs to share resources from a GPU, with vDGA you install the full NVIDIA GPU driver package in the VM, and the graphics card shows up as hardware in Device Manager. In Horizon View 5.2 vDGA is still in Tech Preview, but it is fully supported with View 5.3. Below is the list of compatible NVIDIA GPUs for vDGA. There are limitations to vDGA, including no live vMotion; once GPU resources are exhausted on your host, no other VMs can be powered on against them; and because of the nature of the NVIDIA driver, full VMs are required rather than linked clones or View Composer-based VMs.

vDGA Hardware Compatibility List
  •  GRID K1
  • GRID K2
  • Quadro K2000
  • Quadro K4000
  • Quadro K5000
  • Quadro K6000
  • Quadro 1000M
  • Quadro 2000
  • Quadro 3000M
  • Quadro 4000
  • Quadro 5000
  • Quadro 6000
  • Tesla M2070Q

So now that we have a basic understanding of what vSGA and vDGA mean, we can start to weigh the pros and cons of both technologies. For this first dive into virtualized graphics, we decided to start with vSGA because of its ability to run other VMs on the same host; we are testing in a lab and not production, right? The test equipment was a re-purposed Supermicro 1U server. Full specs below:

  • Supermicro X9SRG-F 1U Host
  • Intel Xeon E5-2643 @ 3.30 GHz
  • 32GB DDR3-1600MHz RAM
  • Dual Gigabit NICs (MGMT and LAN)
  • Intel X540-T2 (SAN)
  • USB Flash Drive (ESXi Boot Partition)
  • PNY NVIDIA Quadro 4000 2GB Graphics Card

BOXX Hosts

We will jump into the installation and configuration in Part 2!

SSD’s saved our View Pod

Synology DS3612xs SSDs

I’ve talked with several colleagues in the virtualization arena, and one of the things they all say is “VDI is tough, it’s always changing, there is nothing harder than virtualizing desktops!” I have learned this lesson the hard way. Two years ago our company deployed VMware’s VDI solution, View (now Horizon View), as a proof of concept (POC) to a group of test users; these users ranged from task workers to advanced users running CPU- and graphics-intensive applications. That test group was roughly 10 people, and 6 months later we deployed VDI in waves to various departments and grew to over 50 users.

Now before I go any further I want to give you a background of the equipment we used to deploy the POC:

  • Dell PowerEdge R620 – Intel Xeon E5-2690 2.9 GHz, 128GB RAM, (6) 1GbE NICs
  • HP ProCurve 5412zl L2/L3 Switch
  • Dual Dell PowerConnect 24-port Gigabit Managed Switches (SAN Network)
  • Dell EqualLogic PS 6100 (48TB Raw) – Total IOPS: 1300

The POC had been deployed before I joined the company, and at the time the VDI experience was very good. But as we continued into production, we started seeing performance hits at random times. I started in April of 2012 and was working in another area of IT, but was quickly attracted to the allure of VDI and everything VMware. So in my spare time I started doing research into VDI performance issues; I learned about PCoIP offloading, CPU and RAM issues, sizing Gold Images properly, etc. I threw out everything I knew and started over with new Gold Images: same performance issues. This all happened over 15 months.

The problem was right in front of us…

Then it occurred to me (read: Google, forums, talking with vExperts) that storage was our issue. I started reading everything about total and peak IOPS and how they relate to VDI, started scoring our various Gold Images, and discovered that some of our images had peak IOPS of over 150! Do the math: the EqualLogic we were running had a peak of 1300 IOPS, and at this point we had over 180 users. So do that easy math: 180 users x 25 IOPS (average) = 4500 IOPS! Houston, we have a problem.
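
If you want to plug in your own numbers, the same back-of-the-napkin math looks like this in PowerShell (the user count and per-user average are the estimates from above):

# Rough VDI IOPS sizing check using the numbers from this post
$users = 180
$avgIopsPerUser = 25
$arrayPeakIops = 1300
$requiredIops = $users * $avgIopsPerUser   # 4500
"Required: $requiredIops IOPS vs. available: $arrayPeakIops IOPS"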

The Solution…sorta

So what did we do? It’s simple but not easy! We realized that as we grew our VDI environment, we improved everything except storage. We upgraded to bigger, more powerful hosts, improved our core switch architecture, expanded to larger SAN switches, and upgraded our power and environmental systems. We did every upgrade except storage. This is not a slight towards our team or myself; we just didn’t have the knowledge and experience to truly understand what we were dealing with in VDI. Getting back to the solution (that is the title of this article, right?), we started meeting with various vendors and sizing solutions, and in the meantime I got the idea to buy a Synology NAS, load it up with some SSDs, and give us a fairly inexpensive band-aid until we could properly implement a permanent storage solution.

In the left corner….Synology DS3612xs

DS3612xs

So let’s talk about the Synology DS3612xs, because this thing is a beast! I chose this model specifically because of its 12-bay capacity and its ease of transition into our test lab environment (I’m begging my boss to buy it for my home lab!). The specs are really impressive:

  • 12 Drive Bays (Expandable to 36 with Add On Chassis)
  • Intel Core i3 CPU
  • 8GB RAM
  • Four 1GbE NICs
  • Available PCIe slot (did someone say 10GbE?)
  • vSphere 5 support with VAAI
  • SSD TRIM Support
  • Synology Awesomesauce DSM operating system

In the right corner…. Intel 520 Series SSDs and 10GbE Fiber

I went with Intel 520 Series 480GB solid state drives because of their reliability, cost, and total IOPS count (42,000 read / 50,000 write). Because of the peak IOPS bursts, and because I have heard horror stories about running SSDs over 1GbE, I wanted a nice big pipe to our SAN network, so I went with an Intel SFP card that supports 10Gb fiber. This fit perfectly into our SAN switches, and I was excited to get everything put together!

Did it fix the IOPS issue?

Yes, it has! But that was its intention all along. We took the time, did the research, and assembled a reasonable budget and a solution that could address an immediate crisis for our end users. Is it a permanent solution? Absolutely not! But we have seen an immediate performance improvement across the board, from recomposes and pool creation to end-user UI improvements; it has been really nice to finally not just know about the problem but to understand it.

The next steps?

Now that we have our band-aid, we can focus on our permanent storage solution. I am really excited to start working with various vendors and stand up some POCs to see how the various solutions work with our systems and processes. Until then, I get a lot of joy watching the performance metrics every morning as login storms go smoothly and VMs clone in seconds as opposed to 90 minutes! I will update this article as I can with some specific performance charts. But for now I am getting ready for our next problem after storage: virtualizing graphics. Isn’t that why we are doing this, though: to learn, understand, solve problems, and make things better? I know I am!

Horizon View 5.3 available for download!

5.3 Screenshot

Hot off the Press! View 5.3 Pieces are available to download.

I will have a couple of posts detailing the upgrade process from 5.2 in the next few weeks (holidays will slow this down) but for now here are the download links:

Quick overview of some of the features for Horizon View 5.3:

  • vDGA is no longer Tech Preview; full support for NVIDIA GPUs passed directly to VMs
  • VSAN Tech Preview – this will be a lot of fun to play around in the lab!
  • Windows 8.1 Support
  • Multimedia Redirection for H.264 media encoding
  • View Blast protocol now supports Audio, Copy/Paste and GFX improvements (View 5.3 Feature Pack 1 and HTML Access install required)
  • USB 3.0 Support (Thin/Zero Clients must have USB 3.0 for support)
  • VCAI is fully supported (offload composer operations to your SAN – a dream come true!)
  • iOS 7 – New View Client (released last week)
  • and much more!

Andre Leibovici has a great article detailing what really is in View 5.3, you can find it here.

Install and Configure Teradici APEX 2800 Offload Card

apex2800_lowpro

What is the APEX 2800 Offload Card?

The Teradici APEX 2800 is a PCoIP offload card for your compute nodes. When PCoIP traffic is detected on a node with a properly configured APEX card installed, the PCoIP software encoding compute cycles can be offloaded (read: dynamically moved) to the card. The benefit is that your VDI machines (more importantly, their displays) no longer have to be crunched by the processors to produce visuals.

Is there a benefit? Absolutely! We have roughly 40-75 task worker profiles on each host; that’s 80-150 displays being crunched by the processor on top of Windows, Office, and other applications. That is a big load on the processor, and by moving the PCoIP processing onto the APEX we save time and resources for the items that really need the speed of the Xeon platform. With the current firmware, only 100 displays can be offloaded, but that is still a ton of compute processing saved by moving to this card!

Install the APEX 2800

So now that we understand what it does, let’s get one physically installed into a host and then install the VIB file and enable the offloading.

  1. If this Host is in production, move all powered on/off VMs to another available host.
  2. Put Host in Maintenance Mode, Shutdown Host, disconnect all cables (power, etc)
  3. Install APEX 2800 card to available PCIe slot (document slot location, try and make location consistent for all hosts in cluster)
  4. Reconnect cables and power Host up
  5. Confirm Host is at DCUI ready screen
  6. Enable SSH on Host, leave in Maintenance Mode

Install APEX 2800 Drivers into ESXi

For this part, make sure you have an SSH client like PuTTY and an SFTP client like WinSCP to transfer the VIB package to your host. Go to Teradici My Support (requires login) to download the latest verified VIB package for your version of Horizon View (we will be using the package for View 5.2). Now that we have all of our pieces, let’s load up the VIB package!

  1. SSH has been enabled from the prior steps, so SSH into the host
  2. Fire up WinSCP and start a session to the host
  3. In WinSCP, browse to the location where you downloaded the ZIP package (i.e. Downloads) in the left pane; in the right pane browse to /tmp/ and move the package into that folder
  4. Back in PuTTY, verify the file is there by entering “cd /tmp” then “ls”; you should now see the .zip file in that folder
  5. To install the VIB package, enter this command: “esxcli software vib install -d /tmp/apex2800-version-esxi-versionxxxxxx.zip” and hit enter
  6. After installation it will spit out an install summary; if you had an existing version that was upgraded, it will tell you here
  7. Close WinSCP and PuTTY and reboot the Host from vSphere
  8. When the Host becomes available again, enable SSH and exit Maintenance Mode (a quick PowerCLI check of the installed VIB is sketched below)
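
Here is the quick PowerCLI check mentioned in step 8. It is only a sketch: the host name is a placeholder, it assumes a PowerCLI version with the Get-EsxCli -V2 switch, and the name filter is a guess at how the Teradici package is labeled:

# List installed VIBs and filter for the Teradici/APEX package (name match is an assumption)
$esxcli = Get-EsxCli -VMHost "esxi01.example.local" -V2
$esxcli.software.vib.list.Invoke() | Where-Object { $_.Name -match "tera|apex" }
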
Install APEX 2800 Drivers for Windows

At this point we have physically installed the APEX 2800 card and installed the VIB package in ESXi; the last piece is to install the OS-aware agent. Go to Teradici My Support (requires login) to download the latest version of the OS agent. Let’s get started!

  1. The VM can be on the Host we just installed the VIB package on.
  2. On your VM locate the OS Agent download and run the “apex2800-version-rel-xxxxx.exe” install package.
  3. Next, next, next!
  4. Finish and Reboot
  5. Time to verify the PCoIP processing is being offloaded!

APEX 2800 Commandlets

Each of these commands can be run from an SSH session, so fire up PuTTY and let’s verify that it’s working!

View APEX 2800 Status

/opt/teradici/pcoip-ctrl -I

View VM Usage and Monitoring Status

/opt/teradici/pcoip-ctrl -V

Enable/Disable APEX 2800

/opt/teradici/pcoip-ctrl -d <device number> -e
/opt/teradici/pcoip-ctrl -d <device number> -x

Displaying VM Property Values

/opt/teradici/pcoip-ctrl -O

All APEX2800 Commandlets

VDI Performance – Part 1 – Gold Images

To build a house you have to start with a solid foundation; I hate that saying, but it’s true. It applies to VDI just the same: to have an efficient VDI environment you MUST have consistent Gold Images. “Gold image” is the term I use for the image I base all of my View Pools on. I have a 3-step process for building my Gold Images:

  1. Create OS Template
  2. Create Gold Image
  3. Install Core Company/Pool Specific Software

Before I start building my pool-specific Gold Images, I start with a base operating system template in vCenter; this allows me to clone Gold Images, dedicated VMs, or test boxes with all of the specific tools, agents, and other items that must be installed for a View environment. This is the most important step in the process because it truly is the foundation for all future Gold Images in my environment. Let’s get to work!

Create OS Template

Create New Virtual Machine

In vSphere I create a new virtual machine called “Win7Gold” and select the “Custom” configuration with the following specs (a rough PowerCLI equivalent is sketched after the list):

  • Name your VM
  • Choose an Inventory Location
  • Choose Host/Cluster Location
  • Choose Resource Pool (If DRS is enabled)
  • Select a Storage Resource (Local or Shared Storage)
  • Virtual Machine Version (since I am running ESXi 5.5 I choose the latest)
  • Guest OS Version (For this example I am using Windows 7 x64)
  • CPU Sockets/Cores – 2vCPU
  • Memory – 3GB (View Best Practices say 3GB minimum)
  • Network – Choose Network and Adapter – VMXNet3
  • SCSI Controller – LSI Logic SAS
  • Create a new virtual disk (unless importing from an existing disk)
  • HDD – 32GB Thin Provision (This is a template, no need to build out the whole disk)
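
For reference, here is a rough PowerCLI equivalent of those wizard choices. It is only a sketch: the vCenter, host, datastore, and port group names are placeholders, and some parameters (like -MemoryGB and -DiskGB) may differ between PowerCLI versions:

# Build the base VM with the same sizing as the wizard (placeholder names throughout)
Connect-VIServer "vcenter.example.local"
$vm = New-VM -Name "Win7Gold" -VMHost "esxi01.example.local" -Datastore "SynologySSD" `
    -NumCpu 2 -MemoryGB 3 -DiskGB 32 -DiskStorageFormat Thin `
    -GuestId windows7_64Guest -NetworkName "VM Network"
# Swap the default adapter and SCSI controller for the types chosen above
Get-NetworkAdapter -VM $vm | Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false
Get-ScsiController -VM $vm | Set-ScsiController -Type VirtualLsiLogicSAS -Confirm:$false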

vCenter will now build out the base VM, time to edit some hardware!

  • Remove Floppy (You can always add to a specific Gold Image later if needed)
  • Select “Options” tab and review the Boot Options
    • Select “Force BIOS Setup”
  • Power on VM and Open VM Console
  • VM will boot into BIOS, select Advanced Tab
  • Disable all of those ports (You can always add to a specific Gold Image later if needed)

BIOS Disable Peripherals

  • Save and Exit
  • Back in vCenter – Edit VM Settings and mount the OS media ISO (make sure to check “Connected” and “Connect at Power On”), then install your operating system

Win 7 Install

  • Once Windows has finished installing, we need to install some VMware packages before anything else because of conflicts between the .NET Framework and the View Agent

VMwareInstalledPrograms

The next step is to optimize Windows for Horizon View. VMware has a great guide for optimizing, including a script to run, but I am a little more visual and discovered VMware Flings. Flings has a great application called the VMware OS Optimization Tool that can automate a lot of the tweaking for your OS template, so let’s use it to tweak!

FlingResults

Once the Fling is installed, let’s run an analysis and optimize everything except: Windows Firewall (for VMware Blast), Windows Update (latest and greatest updates), and Windows Search. Notice how only 4 are left? Everyone’s tweaks will be different, but there are recommended and optional ones you can review!

Once you have optimized with the Fling, uninstall the OS Optimization Tool and let’s get Microsoft updates installed.

WindowsUpdates

Once Windows gets all of its updates (I don’t install IE10), there is one last update that we need to install related to the VMXNet3 driver (KB2550978). After this you can go into Services and disable Windows Update. Once that is disabled, we can delete the update leftovers (a small PowerShell sketch of this cleanup is shown below) from

C:\Windows\SoftwareDistribution\Download
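
Here is a small PowerShell sketch of that cleanup, run in an elevated session (wuauserv is the Windows Update service):

# Stop and disable the Windows Update service, then clear the leftover download cache
Stop-Service wuauserv -Force
Set-Service wuauserv -StartupType Disabled
Remove-Item "C:\Windows\SoftwareDistribution\Download\*" -Recurse -Force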

WindowsFeatures

After Windows Updates are finished, there are some Windows Features that we don’t need, so let’s remove them (optional):

  • Games
  • Windows DVD Maker
  • Windows Media Center
  • Internet Printing Client
  • Windows Fax and Scan
  • Tablet PC components
  • Windows Gadget Platform
  • XPS Services
  • XPS Viewer

The last item before Sysprep is to erase all event logs; you can do this by running this one-liner in an elevated command prompt:

for /F "tokens=*" %1 in ('wevtutil.exe el') DO wevtutil.exe cl "%1"

Let’s get ready to Sysprep our OS template and convert it to a VM template for our Gold Images.

Sysprep

Run sysprep.exe and choose Generalize, OOBE, and Shutdown.

ConvertToTemplate

Right-click on the VM and choose Template > Convert to Template. Now that we have converted our OS template into a VM template, we can proceed to cloning it to build a Gold Image for View Pools. What’s nice about building a VM template, as opposed to starting with a Gold Image, is that I can now deploy a quick test machine or build out dedicated VMs without affecting any other images. Also, if you use vCloud Director, it requires having images as templates for provisioning purposes.

Create Gold Images

Now that we have our OS template built, we can clone it to make a Gold Image. We maintain several Gold Images based on scope of work or department. Eventually we will move away from so many Gold Images and start utilizing Horizon Mirage (v2 now supports VDI). Until then, let’s build out a Gold Image for our Accounting department.

In vCenter let’s clone our Win7Template to make AccountingGold.

DeployTemplate

Each of our View Pools has specific hardware requirements based on the software they use, so let’s make some changes to address some accounting software that we will install: I will enlarge the hard drive to 72GB and increase the RAM to 4GB.

TweakHardware
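
For anyone scripting this, here is a hedged PowerCLI sketch of the same clone and hardware tweaks; the template, host, and datastore names come from this walkthrough or are placeholders, and -CapacityGB requires a reasonably recent PowerCLI:

# Deploy the Gold Image from the template, then bump the RAM and grow the disk
$vm = New-VM -Name "AccountingGold" -Template "Win7Gold" -VMHost "esxi01.example.local" -Datastore "SynologySSD"
Set-VM -VM $vm -MemoryGB 4 -Confirm:$false
Get-HardDisk -VM $vm | Set-HardDisk -CapacityGB 72 -Confirm:$false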

Since we ran Sysprep on the OS template VM, we need to go through the basic Windows install steps on the new Gold Image. Next, join the machine to the domain and reboot. Some people say to join the domain last, but in my experience most software packages are located on a central file server, and it makes things easier to authenticate to AD, especially if you are trying to configure a DSN, SQL, or AD-specific application.

AdjustBestPerformance

I like to tweak the display properties at this point by going to System Properties > Advanced System Settings > Advanced > Performance > Settings > Adjust for Best Performance. You can also set the Windows Theme to Windows Classic.

Install Core Company/Pool Specific Software

From here the sky is the limit on what you want to install for your Gold Images; we typically install MS Office, Flash, Chrome, Java, etc. For every application we install, we always do a first run to ensure everything installed correctly. If you install MS Office, re-enable the Windows Update service and update Office (via the Internet or WSUS, since we are in the domain now). Once complete, disable the Windows Update service and delete the updates from:

C:\Windows\SoftwareDistribution\Download

So let’s assume you have installed everything that you want and are ready to provision this image for use by a View Pool; there are several things you want to tweak to make the linked-clone process go smoothly.

SDelete is a Sysinternals utility for Windows that can zero out free space in a Windows partition; since we are running our VMs thin provisioned, I like to make sure my Gold Images are as compact as possible when the replicas are produced. You will need to download a copy of SDelete here. To start, let’s run a check disk on the system partition:

chkdsk /f

Now we can run sdelete from a command prompt. NOTE: this will take some time

sdelete -z

Now let’s enable SSH on the host that our Gold Image is running on. Shut down your Gold Image and open PuTTY or your preferred SSH client.

ls
cd vmfs
ls
cd volumes
ls
cd SPECIFIC DATASTORE (Case Sensitive)
ls
cd SPECIFIC VM (Case Sensitive)
ls
vmkfstools -K SPECIFIC VMDK FILE (Case Sensitive)

Hole Punching should take a while but will eventually finish.

Once that is complete, start your VM back up and run chkdsk /f one more time. Also be sure to disable SSH in vCenter, and clear the event logs one more time.
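
If you prefer PowerCLI to the vSphere Client for that, here is a one-liner sketch that mirrors the SSH enable command from the vSGA post (the host name is a placeholder):

# Turn SSH back off on the host once the hole punching is done
Get-VMHost "esxi01.example.local" | Get-VMHostService | Where-Object { $_.Key -eq "TSM-SSH" } | Stop-VMHostService -Confirm:$false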

Now let’s build our shutdown script, and we will be done with our Gold Image. We use this script to release the Windows KMS key, rearm the MS Office KMS, release the IP, and shut down. Create a text file on the desktop called “script.bat”, right-click the .bat file, choose Edit, insert the following text, and save:

"C:\Program Files (x86)\Common Files\microsoft shared\OfficeSoftwareProtectionPlatform\OSPPREARM"
 slmgr /ckms
 ipconfig /release
 ipconfig /flushdns
 shutdown -s -t 00

Run the script.bat file, and your machine is now 100% ready to become a parent Gold Image for a View Pool; take a snapshot and start provisioning a View Pool!

Snapshot

Up next in our 6 Part series is Part 2 – PCoIP Best Practices

VDI Performance Series Index

Part 1 – Gold Images

Part 2 – PCoIP Best Practices

Part 3 – Persistent vs Non-Persistent

Part 4 – Storage

Part 5 – End User Experience

Part 6 – Wrap Up and What’s Next