Horizon View & vSGA – Part 2 – Install and Configure

In Part 1, we reviewed the different options for virtualizing graphics with Horizon View; now it is time to get our hands dirty. We decided to go with vSGA instead of vDGA for the ability to vMotion workloads to other hosts if needed. The hosts we started with needed a RAM upgrade and a SAN connectivity card, because we are doubling the load capacity on each host and need to connect to our shared storage. Here are the before and after specs for each host:

  • Supermicro X9SRG-F 1U Host
  • Intel Xeon E5-2643 @ 3.30 GHz
  • 16GB DDR3-1600MHz RAM (Upgraded to 32GB)
  • Dual Gigabit NICs (MGMT and LAN)
  • IPMI Port (For Console Access)
  • Intel X540-T2 (Installed for SAN Connectivity)
  • USB Flash Drive (ESXi Boot Partition)
  • PNY NVIDIA Quadro 4000 2GB Graphics Card

Let's power down the host, physically install the RAM and the Intel X540-T2 card, and start preparing to install VMware ESXi 5.1 U1. For future releases of View 5.3 and beyond we will install ESXi 5.5, but for now we are staying on View 5.2.

Next we will install ESXi 5.1 U1, which is a straightforward process.

Everyone has a different way of configuring ESXi; some use Host Profiles, some don't. All I will do for now is configure a single MGMT NIC, set the DNS name, and disable IPv6. Once I add the host to my cluster, I will run a Host Profile compliance check for the remaining settings. The host will reboot; once I have confirmed that ESXi came up properly, I'm going to shut it down and move it back into our datacenter. Now for the fun parts!
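
The MGMT NIC I configure at the DCUI, but the DNS name and IPv6 pieces can also be scripted. Here is a minimal PowerCLI sketch, assuming your PowerCLI build exposes the -IPv6Enabled parameter; the vCenter, host, domain, and DNS values are placeholders for your environment:

# Placeholders: swap in your own vCenter, host name, domain and DNS servers
Connect-VIServer -Server "VCENTER FQDN"
$vmhost = Get-VMHost -Name "esx01.domain.local"

# Set the DNS name and disable IPv6 (the IPv6 change is what forces the reboot)
Get-VMHostNetwork -VMHost $vmhost | Set-VMHostNetwork `
    -HostName "esx01" -DomainName "domain.local" `
    -DnsAddress "10.0.0.10","10.0.0.11" -IPv6Enabled:$false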

Now that our host is powered up, I can join it to my cluster and start loading the Intel and NVIDIA VIBs (vSphere Installation Bundles). I love using PowerCLI when I can, so I will deploy ESXi to all of my hosts, get them joined to the cluster, and run this fancy command to enable SSH on all hosts and start the VIB upload process:

Get-Cluster "CLUSTER NAME" | Get-VMHost | ForEach {Start-VMHostService -HostService ($_ | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"})}

Time to fire up PuTTY (or your preferred SSH client) and WinSCP to transfer your VIB bundles to the /tmp/ folder on each host. Install the VIB packages by running the following command in PuTTY:

esxcli software vib install -d /tmp/software-version-esxi-version-xxxxxx.zip

After installing the VIBs, I need to reboot each host, start SSH, and PuTTY back in to confirm a few things and get the NVIDIA services started. Verify that the NVIDIA VIB was loaded successfully:

esxcli software vib list | grep NVIDIA
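
If you would rather check the whole cluster from PowerCLI instead of grepping host by host, a rough equivalent (reusing the cluster name from earlier; Get-EsxCli exposes the same esxcli namespaces) looks like this:

Get-Cluster "CLUSTER NAME" | Get-VMHost | ForEach {
    $vmhost = $_
    # List the installed VIBs on each host and keep only the NVIDIA one
    (Get-EsxCli -VMHost $vmhost).software.vib.list() |
        Where { $_.Vendor -match "NVIDIA" } |
        Select @{N="Host";E={$vmhost.Name}}, Name, Version
}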

Because these hosts have only one graphics card, I need to tell ESXi to release the card and allow it to be virtualized, so I will run this command to find the PCI ID of the graphics card:

lspci | grep -i display

I receive the following response:

00:04:00.0 Display controller: NVIDIA Corporation Quadro 4000

Then I will set the ownership flag of the Quadro Card in ESXi:

vmkchdev -v 00:04:00.0

Verify that the xorg service is running; if it is not, start it with the following commands:

/etc/init.d/xorg status
/etc/init.d/xorg start
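
The X.Org server also shows up as a host service, so if you prefer PowerCLI you can check and start it the same way we started SSH. A small sketch, assuming the service key is "xorg" as it is on our 5.1 hosts (verify on your build):

$vmhost = Get-VMHost -Name "esx01.domain.local"
$xorg = Get-VMHostService -VMHost $vmhost | Where { $_.Key -eq "xorg" }
# Start X.Org only if it is not already running
if (-not $xorg.Running) { Start-VMHostService -HostService $xorg }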

Now that everything is set up, we can confirm that ESXi has grabbed the graphics card and start monitoring GPU resources. So let's see how much graphics memory we have reserved for VMs by invoking the gpuvm command:

gpuvm

It shows that there is 2GB of video RAM and that a VM is already reserving some of it!

Next, I will run the SMI interface in watch mode to constantly monitor the performance of the graphics card; this lets me see how much of its resources I have available.

watch -n 1 nvidia-smi


Now it's time to configure a pool of VMs that can utilize the vSGA settings we configured, and see the results of our work in Part 3!

Horizon View & vSGA – Part 1 – Intro

Since we have stepped up storage by throwing SSDs at the problem, we are now on to the next task: getting the best graphics experience to our end users. For too long people have relied on mammoth workstations with big graphics cards to get the job done. We have purchased several big-time workstations for our CAD and modeling teams. Doesn't that fly in the face of desktop virtualization and consolidating all processing in the datacenter? So the next logical step in the evolution of VDI is to virtualize 3D graphics.

Now that sounds like an easy task in this day and age of Apple's motto of "it just works," but imagine the time and dedication it took to create the first hypervisor. CPU and RAM are one thing; dedicating processing threads to rendering graphics is very specific and takes time to perfect. VMware started with SVGA and Soft3D in View 5.0, and they have made major strides in graphics utilization. That being said, VMware announced two exciting features in Horizon View 5.2: vSGA and vDGA. I will give a quick summary of both, but for more detailed information head over to Andre Leibovici's blog for a full breakdown of what they mean. There is also the VMware white paper on the complete line of graphics acceleration deployment.

vSGA – Virtual Shared Graphics Acceleration


vSGA gives you the ability to provision multiple VMs (linked clones or full VMs) to single or multiple GPUs. The graphics card is presented to the VM as a software video driver, and the graphics processing is handled by an ESXi driver (VIB). Graphics resources are reserved on a first-come, first-served basis, so sizing and capacity are important to consider. You can also have various pool types on a host, and not all of them need graphics; this is important if you have various workstation classifications running in a cluster. vSGA is a great solution for users that require higher-than-normal graphics capability: rendering 1080p video, OpenGL, DirectX, etc. We will get into configuring pools for vSGA, but there are three options: Automatic, Software, and Hardware.

vSGA Hardware Compatibility List
  • GRID K1
  • GRID K2
  • Quadro 4000
  • Quadro 5000
  • Quadro 6000
  • Tesla M2070Q

vDGA – Virtual Dedicated Graphics Acceleration


vDGA differs from vSGA in that the physical GPU is assigned to the VM using DirectPath I/O, so the full GPU is dedicated to a specific machine. Where vSGA allows multiple VMs to share resources from one GPU, with vDGA you install the full NVIDIA GPU driver package in the VM, and the graphics card shows up as hardware in Device Manager. In Horizon View 5.2 vDGA is still in Tech Preview, but it is fully available with View 5.3. Below is the list of compatible NVIDIA GPUs for vDGA. There are limitations to vDGA: no live vMotion; once GPU resources are exhausted on a host, no additional GPU-backed VMs can be powered on; and, because of the nature of the NVIDIA driver, full VMs are required rather than linked clones or View Composer based VMs.

vDGA Hardware Compatibility List
  • GRID K1
  • GRID K2
  • Quadro K2000
  • Quadro K4000
  • Quadro K5000
  • Quadro K6000
  • Quadro 1000M
  • Quadro 2000
  • Quadro 3000M
  • Quadro 4000
  • Quadro 5000
  • Quadro 6000
  • Tesla M2070Q

So now that we have a basic understanding of what vSGA and vDGA mean, we can start to weigh the pros and cons of both technologies. For this first dive into virtualized graphics we decided to start with vSGA because of the ability to run other VMs on the same host (since we are testing in a lab and not production, right!). Our test equipment was a repurposed Supermicro 1U server. Full specs below:

  • Supermicro X9SRG-F 1U Host
  • Intel Xeon E5-2643 @ 3.30 GHz
  • 32GB DDR3-1600MHz RAM
  • Dual Gigabit NICs (MGMT and LAN)
  • Intel X540-T2 (SAN)
  • USB Flash Drive (ESXi Boot Partition)
  • PNY NVIDIA Quadro 4000 2GB Graphics Card


We will jump into the installation and configuration in Part 2!

SSDs saved our View Pod


I've talked with several colleagues in the virtualization arena, and one of the things they all say is: "VDI is tough, it's always changing; there is nothing harder than virtualizing desktops!" I have learned this lesson the hard way. Two years ago our company deployed VMware's VDI solution, View (now Horizon View), as a proof of concept (POC) to a group of test users ranging from task workers to advanced users running CPU- and graphics-intensive applications. That test group was roughly 10 people; 6 months later we deployed VDI in waves to various departments and grew to over 50 users.

Now before I go any further, I want to give you a rundown of the equipment we used to deploy the POC:

  • Dell PowerEdge R620 – Intel Xeon E5-2690 2.9 GHz, 128GB RAM, (6) 1Gb NICs
  • HP ProCurve 5412zl L2/L3 Switch
  • Dual Dell PowerConnect 24-Port Gigabit Managed Switches (SAN Network)
  • Dell EqualLogic PS6100 (48TB Raw) – Total IOPS: 1300

The POC had been deployed before I joined the company, and at the time the VDI experience was very good. But as we continued into production, we started seeing performance hits at random times. I started in April of 2012 working in another area of IT, but was quickly attracted to the allure of VDI and everything VMware. So in my spare time I started researching VDI performance issues; I learned about PCoIP offloading, CPU and RAM issues, sizing gold images properly, etc. I threw out everything I knew and started over with new gold images: same performance issues. This all happened over 15 months.

The problem was right in front of us…

Then it occurred to me (read: Google, forums, talking with vExperts) that storage was our issue. I started reading everything about total and peak IOPS and how they relate to VDI, and I started scoring our various gold images and discovered that some of them had peak IOPS of over 150! Do the math: the EqualLogic we were running had a peak of 1300 IOPS, and at this point we had over 180 users. 180 users x 25 IOPS (average) = 4,500 IOPS! Houston, we have a problem.
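
If you want to sanity-check your own environment, the napkin math scripts easily. A quick PowerShell sketch, with our numbers as placeholders; substitute your own user count, per-user average, and array peak:

$users         = 180
$iopsPerUser   = 25      # our measured average per desktop
$arrayPeakIops = 1300    # the EqualLogic's peak in our configuration

$required = $users * $iopsPerUser
"Need $required IOPS, have $arrayPeakIops IOPS, shortfall of $($required - $arrayPeakIops)"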

The Solution…sorta

So what did we do? It's simple but not easy! We realized that as we grew our VDI environment, we improved everything except storage. We upgraded to bigger, more powerful hosts, improved our core switch architecture, expanded to larger SAN switches, and upgraded our power and environmental systems. We did every upgrade except storage. This is not a slight toward our team or myself; we just didn't have the knowledge and experience to truly understand what we were dealing with in VDI. Getting back to the solution (that is the title of this article, right?), we started meeting with various vendors and sizing solutions around them, and in the meantime I got the idea to buy a Synology NAS, load it up with some SSDs, and give us a fairly inexpensive band-aid until we could properly implement a permanent storage solution.

In the left corner….Synology DS3612xs


So let's talk about the Synology DS3612xs, because this thing is a beast! I chose this model specifically because of its 12-bay capacity and its ease of transition into our test lab environment (I'm begging my boss to buy it for my home lab!). The specs for this thing are really impressive:

  • 12 Drive Bays (Expandable to 36 with Add-On Chassis)
  • Intel Core i3 CPU
  • 8GB RAM
  • (4) 1Gb NICs
  • Available PCIe slot (did someone say 10Gb?)
  • vSphere 5 support with VAAI
  • SSD TRIM Support
  • Synology Awesomesauce DSM operating system

In the right corner….Intel 520 Series SSDs and 10Gb Fiber

I went with Intel 520 Series 480GB solid state drives because of their reliability, cost, and total IOPS count (42,000 Read/50,000 Write). Because of the peak IOPS bursts, and because I have heard horror stories about running SSDs over 1Gb links, I wanted a nice big pipe to our SAN network, so I went with an Intel SFP+ card that supports 10Gb fiber. It fit perfectly into our SAN switches, and I was excited to get everything put together!

Did it fix the IOPS issue?

Yes it did! But then, that was its intention all along. We took the time, did the research, and assembled a reasonable budget and solution that could solve an immediate crisis for our end users. Is it a permanent solution? Absolutely not! But we have seen an immediate performance improvement across the board, from recomposes and pool creation to end-user UI responsiveness, and it has been really nice to finally not just know about the problem but to understand it.

The next steps?

Now that we have our band-aid, we can focus on our permanent storage solution. I am really excited to start working with various vendors and stand up some POCs to see how the different solutions work with our systems and processes. Until then, I get a lot of joy watching the performance metrics every morning as login storms go smoothly, and watching VMs clone in seconds as opposed to 90 minutes! I will update this article as I can with some specific performance charts. But for now I am getting ready for our next set of problems after storage: virtualizing graphics. Isn't that why we are doing this, though: to learn, understand, solve problems, and make things better? I know I am!

Veeam ONE Monitor Free Edition Review


As 2014 is about to start, December is always a time to have internal IT meetings on how to improve processes, workflows, and responsiveness. This year our IT department experienced both positives and negatives in those categories, from power outages to unexpected server downtime. The big takeaway from 2013 was to be a more proactive team when it comes to our virtual systems and to resolve problems before they become downtime scenarios.

So as I set out to do research (read: Google) on proactive monitoring solutions for our virtual infrastructure, I came across several good candidates like SolarWinds, Xangati, vCOPS, and Veeam. I think money grows on trees; my bosses think otherwise. So I decided to deploy Veeam ONE Free Edition to see if it cut the mustard for what we wanted, and whether the free version is an actual solution or just a digital carrot dangled in front of the budget, waiting to be purchased.

There are a few differences/limitations in the free version, which I have highlighted below. Now that we have that covered, let's find out if Veeam ONE is really free and usable.

Veeam ONE Monitor Free vs Paid

Installation

Installation was quite easy to stand up. We built out a VM with the necessary "hardware" requirements on a Windows 2008 R2 x64 box. Everything was pretty much next, next, next, except for a restriction we have on SQL database creation, and that was an easy fix: we simply took the CREATE script that came packaged with the installer, ran it in SSMS, and the database was created. We verified the default ports, linked Veeam ONE to our primary vCenter Server, and assigned some users to the Users and Admins groups. One reboot and everything came back up just fine.
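
If you hit the same SQL restriction and SSMS isn't handy, the packaged CREATE script can also be run from PowerShell. A sketch only: the instance name and script path are hypothetical, and Invoke-Sqlcmd comes from the SQLPS module that ships with the SQL client tools:

# Hypothetical instance and path; point these at your SQL server and the
# CREATE script that ships with the Veeam ONE installer
Invoke-Sqlcmd -ServerInstance "SQL01" -InputFile "C:\Temp\VeeamONE_CreateDb.sql"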

Configuration

Configuration was a breeze. There are two types of roles, Admins and Users; the biggest difference I can see is that Users are limited in what they can change on events, whereas Admins have higher control. Email notifications are limited to the canned responses in the Free Edition, but they are sufficient for what we wanted: set up your SMTP settings and go! With notifications you can include known KB articles for a specific issue, which is a helpful step for your lower-tier help desk guys if they don't live on VMware's KB site like I do! If you have a broader SNMP capture system, Veeam ONE links up nicely. On to views and the dashboard.

Views and Dashboard

This is where I personally think Veeam ONE shines. You have three views to choose from: Infrastructure, Business, and Data Protection. In the Free Edition, Data Protection is unavailable; it ties into Veeam Backup & Replication and Enterprise Manager for a higher-level view of your environment as it relates to your data integrity. This is a view we would like, but we completely understand why Veeam left it out of the free edition. The Infrastructure view is where I live; it gives me a complete breakdown of my vCenter environment separated by datacenters, clusters, hosts, VMs, resources, and datastores. As referenced in the free vs. paid chart, some notifications are limited, but it is still a ton of information to get you closer to resolution.

My favorite is the Dashboard view. We are a VDI shop, so I built out a kiosk-mode VM that auto-loads the Veeam ONE client in full-screen mode (pictured above), giving me a dedicated station in my office to turn around and focus on a specific problem or event. Lately we have been testing some VDI users on a certain Synology DS3612xs with SSDs (article coming soon!), and it's been nice to see statistics on performance and be alerted if the datastore spikes with latency.

Conclusions

Veeam ONE Free Edition is a great complement to your vCenter environment and has helped us isolate issues that we weren't even looking for. Veeam has done a great job giving a lot of functionality in a free edition. There are some limitations that will make us seriously consider the paid version (management likes reports!), but with some knowledge of PowerShell and PowerCLI, vCheck can help with this! We have only had it up and running for 2 weeks, and I obsessively knock out all the events that come across my inbox from the notification system. It has made us think twice about issues before diving in. I would highly recommend standing up the free version in your environment; what do you have to lose but a little more proactiveness, and maybe a different view of your vCenter environment?

VeeamONE Monitor Free Edition Link

vBlog voting starts soon…


I woke up to some good news yesterday: Eric Siebert from vSphere-land updated his vBlog list and announced that voting for Top vBlog 2014 will be starting soon. He has updated his vLaunchPad site, and I am happy to have been added to the list of awesome bloggers and contributors in the technology community. He will be updating the site and adding categories for nominations. I have a few articles to post in the next few days, so be on the lookout….maybe something about a certain Synology DS3612xs that we have running some VDI….so stay tuned!

Horizon View 5.3 available for download!


Hot off the press! The View 5.3 pieces are available to download.

I will have a couple of posts detailing the upgrade process from 5.2 in the next few weeks (the holidays will slow this down), but for now here are the download links:

Quick overview of some of the features for Horizon View 5.3:

  • vDGA is no longer a Tech Preview; full support for NVIDIA GPUs passed directly to VMs
  • VSAN Tech Preview – this will be a lot of fun to play around in the lab!
  • Windows 8.1 Support
  • Multimedia Redirection for H.264 media encoding
  • View Blast protocol now supports Audio, Copy/Paste and GFX improvements (View 5.3 Feature Pack 1 and HTML Access install required)
  • USB 3.0 Support (Thin/Zero Clients must have USB 3.0 for support)
  • VCAI is fully supported (offload composer operations to your SAN – a dream come true!)
  • iOS 7 – New View Client (released last week)
  • and much more!

Andre Leibovici has a great article detailing what really is in View 5.3, you can find it here.

Top vBlog 2014 – Starts Soon

If you are not familiar with Eric Siebert's blog and vBlog awards, please check out his site. Eric hosts an annual Top vBlog award for the dedicated several who blog about virtualization and storage. For 2014, Eric has some very special news: Veeam is sponsoring it, so not only can you brag about being one of the best vBlogs (think: Highlander), but you could also win some cool prizes like a Mac Mini, iPad Mini, HP MicroServer, Beats by Dre headphones, a Roku 3, and a Wii U.

The call for nominations for blogger categories will start in December, with official voting beginning in January. For more information about Top vBlog 2014, check out Coming Soon: Top vBlog 2014 Edition on vsphere-land.com.

Manually Remove vCenter Server from SSO


I was playing around with Linked Mode vCenter Servers in our test lab last week and ran across an error in the vSphere Web Client after I removed the second vCenter Server. The specific error I got was: Could not connect to one or more vCenter Server Systems: vcenter address:443/sdk

My first guess was that the uninstallation was successful but that SSO had held onto some remnants of the second vCenter, so it needed to be manually unregistered from the Lookup Service. Here is what I did to get everything fixed.

Credit due: Mark Almeida-Cardy at vi-admin.net has a great article about how to resolve this with vCenter 5.1, so I will use his post with updates for vCenter 5.5.

VMware has a KB Article 2033238 that lays out the steps for vCenter 5.1 as well.

For Windows: <SSO install directory>\ssolscli\ssolscli listServices <Lookup Service URL>

For vCenter Server Appliance: /usr/lib/vmware-sso/bin/vi_regtool listServices <Lookup Service URL>

  1. In the list of services, locate the service entry that contains the address of the system where the solution was installed.
  2. Record the ownerId of the service entry.
  3. In the vSphere Web Client, navigate to Administration > SSO Users and Groups > Application Users and locate the application user with the same name as the ownerId you recorded.
  4. Right-click the user and select Delete Application User.
  5. At the command line, remove the service entry from the Lookup Service.
    1. Create a text file that contains the service ID.
      The service ID must be the only text in the file.
  6. Unregister the entry for the solution by running the unregisterService command. Note: it may be necessary to set your JAVA_HOME environment variable (default JRE location below).
    set JAVA_HOME=c:\program files\vmware\infrastructure\jre

For Windows: <SSO install directory>\ssolscli\ssolscli unregisterService -d <Lookup Service URL> -u "Lookup Service administrator user" -p "administrator password" -si <serviceId file>

For vCenter Server Appliance: /usr/lib/vmware-sso/bin/vi_regtool unregisterService -d <Lookup Service URL> -u "Lookup Service administrator user" -p "administrator password" -si <serviceId file>

Script I Used: ssolscli.cmd listServices https://VCENTER FQDN:7444/lookupservice/sdk > C:\sso_services.txt

Output txt file looked like this: 

Intializing registration provider…
Getting SSL certificates for https://VCENTER FQDN:7444/lookupservice/sdk
Anonymous execution
Found 15 services.

Service 1
———–
serviceId=Default-First-Site:9a003c74-4229-4d60-b89d-a0814ea00060
serviceName=VMware vCenter Support Assistant, WebClientPluginPackage
type=vsphere-client-serenity
endpoints={[url=https://IP ADDRESS:8443/plugin/package/ph-admin-ui.zip,protocol=http]}
version=1.0.0.1398556
description=
ownerId=support-assistant-localhost.localdom-21cb77ad-266c-4f84-9262-a1c0ddf1726c@vsphere.local
productId=com.vmware.phonehome
viSite=Default-First-Site

etc…..

Next, let's identify the services that we need to unregister and copy/paste their serviceIds into another text file (remember the name and location of this file).
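
With 15 services in the dump, pulling the serviceId lines out is quicker with a little PowerShell than with copy/paste. A sketch, using the file names from this example:

Select-String -Path "C:\sso_services.txt" -Pattern "^serviceId=" |
    ForEach { $_.Line -replace "^serviceId=", "" } |
    Set-Content "C:\sso_unregister.txt"
# Now edit C:\sso_unregister.txt and keep only the IDs you want removed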

Now we can run our unregister command; mine looked like this: ssolscli unregisterService -d https://VCENTER FQDN:7444/lookupservice/sdk -u "LOOKUP SERVICE USERNAME" -p "PASSWORD" -si <FILE LOCATION>

Here is the result that I got:

C:\Program Files\VMware\Infrastructure\VMware\CIS\vmware-sso>ssolscli unregisterService -d https://VCENTER FQDN:7444/lookupservice/sdk -u "LOOKUP SERVICE USERNAME" -p "PASSWORD" -si C:\sso_services.txt
Intializing registration provider…
Getting SSL certificates for https://VCENTER FQDN:7444/lookupservice/sdk
Service with id "Default-First-Site:cdda2053-438a-439d-95aa-b47081f94e42" is successfully unregistered
Service with id "Default-First-Site:31d628f8-60e7-4955-a9aa-fd3e3a24bb31" is successfully unregistered
Service with id "Default-First-Site:6e21c57b-da61-460b-b6b2-ef82a3647dad" is successfully unregistered
Return code is: Success 0

The second instance of vCenter has been removed, and there are no errors on startup of the vSphere Web Client….Happy Admin!


Veeam B&R v7 Support for vSphere 5.5 is here!


Veeam has released Update 2 for Backup & Replication v7 today, with full support for vSphere 5.5 and Hyper-V 2012 R2 (the first backup company to support both!). Here is a breakdown of all the new and exciting features:

VMware

  • vSphere 5.5 support, including support for 62TB virtual disks and virtual hardware v10 virtual machines.
  • vCloud Director 5.5 support.
  • Support for Windows Server 2012 R2 and Windows 8.1 as guest virtual machines (VMs).
  • Added the ability to limit the maximum number of active VM snapshots per datastore to prevent it from being overfilled with snapshot deltas. The default value of 4 active snapshots can be controlled with the MaxSnapshotsPerDatastore (REG_DWORD) registry key (see the sketch after this list).
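
For reference, changing that value from PowerShell on the backup server looks roughly like this. The key path below is the standard Veeam B&R registry location, but confirm it against Veeam's KB before relying on it:

# Example: raise the cap from the default 4 to 8 active snapshots per datastore
New-ItemProperty -Path "HKLM:\SOFTWARE\Veeam\Veeam Backup and Replication" `
    -Name "MaxSnapshotsPerDatastore" -PropertyType DWord -Value 8 -Force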

Microsoft

  • Windows Server 2012 R2 Hyper-V and free Hyper-V Server 2012 R2 support, including support for Generation 2 virtual machines.
  • Support for Windows Server 2012 R2 and Windows 8.1 as guest virtual machines (VMs)
  • Support for System Center 2012 R2 Virtual Machine Manager (VMM)
  • Support for the installation of Veeam Backup & Replication and its components on Windows Server 2012 R2 and Windows 8.1.

Built-in WAN acceleration

  • Increased data processing performance up to 50% with hard drive based cache, and up to 3 times with SSD based cache. Multi-core CPU on source WAN accelerator is recommended to take full advantage of the enhanced data processing engine.

Replication

  • Added ability for source and target proxy servers to reconnect and resume replication when network connection between source and target site drops for a short period of time.

Tape

  • Added support for a number of enterprise-class tape libraries with partitioning functionality that allows presenting multiple tape library partitions to the same host.
  • Import/export slot interaction has been redesigned to add support for a number of IBM and Oracle tape libraries.

Application-aware processing

  • Added ability for application-aware processing logic to detect passive Microsoft Exchange DAG database present on the VM, and process it accordingly.
  • Added support for Exchange CCR clusters.

User interface

  • The user interface should now remember the size and position of the main window, as well as all panels and columns.

I am proud to say that we upgraded our Backup Environment this morning and everything is running great. Big thanks to Veeam and their Engineering Team for releasing this so fast!

Here is a link to the KB article for this Update.

Install and Configure Teradici APEX 2800 Offload Card


What is the APEX 2800 Offload Card?

The Teradici APEX 2800 is a PCoIP offload card for your compute nodes. When PCoIP traffic is detected on a node with a properly configured APEX card installed, the PCoIP software encoding compute cycles can be offloaded (read: dynamically moved) to the card. This keeps your VDI machines (more importantly, their displays) from being crunched by the host processors to produce visuals.

Is there a benefit? Absolutely! We have roughly 40-75 task worker profiles on each host; that's 80-150 displays being crunched by the processor on top of Windows, Office, and other applications. That is a big load on the processor, and by moving the PCoIP processing onto the APEX we can save time and resources for the items that really need the speed of the Xeon platform. With the current firmware only 100 displays can be offloaded, but that is still a ton of compute processing saved by moving to this card!

Install the APEX 2800

So now that we understand what it does, let's get one physically installed into a host, then install the VIB file and enable the offloading.

  1. If this host is in production, move all powered-on/off VMs to another available host.
  2. Put the host in Maintenance Mode, shut it down, and disconnect all cables (power, etc.)
  3. Install the APEX 2800 card in an available PCIe slot (document the slot location, and try to keep the location consistent for all hosts in the cluster)
  4. Reconnect the cables and power the host up
  5. Confirm the host is at the DCUI ready screen
  6. Enable SSH on the host and leave it in Maintenance Mode (steps 2 and 6 can be scripted; see the PowerCLI sketch below)
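
A minimal PowerCLI sketch for those two steps; the host name is a placeholder:

# Step 2: put the host in Maintenance Mode before shutting it down
$vmhost = Get-VMHost -Name "esx01.domain.local"
Set-VMHost -VMHost $vmhost -State Maintenance

# Step 6: after power-up, enable SSH and leave the host in Maintenance Mode
Get-VMHostService -VMHost $vmhost | Where { $_.Key -eq "TSM-SSH" } | Start-VMHostService
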
Install APEX 2800 Drivers into ESXi

For this part, make sure you have an SSH client like PuTTY and an SFTP client like WinSCP to transfer the VIB package to your host. Go to Teradici My Support (requires login) to download the latest verified VIB package for your version of Horizon View (we will be using the package for View 5.2). Now that we have all of our stuff, let's load up the VIB package!

  1. SSH was enabled in the prior steps; SSH into the host
  2. Fire up WinSCP and start a session to the host
  3. In WinSCP, browse to the location where you downloaded the ZIP package (i.e. Downloads) in the left pane; in the right pane, browse to /tmp/ and move the package into that folder
  4. Back in PuTTY, verify the file is there by entering "cd /tmp" then "ls"; you should now see the .zip file in that folder
  5. To install the VIB package, enter "esxcli software vib install -d /tmp/apex2800-version-esxi-versionxxxxxx.zip" and hit Enter
  6. After installation it will spit out an install summary; if an existing version was upgraded, it will tell you here
  7. Close WinSCP and PuTTY and reboot the host from vSphere
  8. When the host becomes available again, enable SSH and exit Maintenance Mode
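
Before exiting Maintenance Mode, you can confirm the package landed without opening another SSH session. A hedged sketch reusing the Get-EsxCli trick from the vSGA post; matching on "apex" in the VIB name is an assumption, so adjust to whatever the install summary reported:

# List installed VIBs on the host and look for the APEX package
(Get-EsxCli -VMHost (Get-VMHost -Name "esx01.domain.local")).software.vib.list() |
    Where { $_.Name -match "apex" }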

Install APEX 2800 Drivers for Windows

At this point we have physically installed the APEX 2800 card and installed the VIB package into ESXi; the last piece is to install the OS-aware agent. Go to Teradici My Support (requires login) to download the latest version of the OS agent. Let's get started!

  1. The VM can be on the Host we just installed the VIB package on.
  2. On your VM locate the OS Agent download and run the “apex2800-version-rel-xxxxx.exe” install package.
  3. Next, next, next!
  4. Finish and Reboot
  5. Time to verify the PCoIP processing is being offloaded!

APEX 2800 Commandlets

Each of these commands can be run from an SSH session, so fire up PuTTY and let's verify that it's working!

View APEX 2800 Status

/opt/teradici/pcoip-ctrl -I

View VM Usage and Monitoring Status

/opt/teradici/pcoip-ctrl -V

Enable/Disable APEX 2800

/opt/teradici/pcoip-ctrl -d <device number> -e
/opt/teradici/pcoip-ctrl -d <device number> -x

Displaying VM Property Values

/opt/teradici/pcoip-ctrl -O

All APEX2800 Commandlets