As I mentioned in a previous post, I’ve spent the last few weeks working with a product called Login VSI. What does it do? In essence, it forms part of a virtual desktop deployment toolkit: it benchmarks the performance of a VDI or SBC (Server Based Computing, such as Remote Desktop Services/Terminal Server) environment and provides accurate end user performance metrics (OS and application response times) to pinpoint the “tipping point” in performance of a VDI deployment.
For those who’ve already done several VDI deployments, you’ll already know the level of detail (and in some senses, educated guesswork) that goes into designing a solution. The types of questions posed include :-
– How many desktops do I need?
– How many IOPS do I need?
– How many physical disks do I need to provide that amount of IOPS?
– What sort of user metrics do I have from desktop assessment phases of the project?
– What are the requirements on the network fabric?
There are a lot more questions along similar lines, but all are important in the design of the solution to ensure it is fit for purpose. Once all numbers have been crunched, a design comes out of the other end that we hope will cut the mustard when it’s put into production.
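Just to give a flavour of the number crunching, here’s a rough back-of-envelope IOPS sizing sketch in Python. All of the figures (per-desktop IOPS, read/write split, RAID penalty, per-disk IOPS) are assumptions for illustration; you’d substitute the numbers from your own desktop assessment.

```python
import math

# Assumed inputs - replace with figures from your own desktop assessment
desktops = 500            # number of virtual desktops
iops_per_desktop = 10     # steady-state IOPS per desktop
write_ratio = 0.7         # VDI workloads are typically write-heavy
raid_write_penalty = 2    # RAID 10 write penalty (RAID 5 would be 4)
iops_per_disk = 175       # a typical 15k SAS spindle

# Front-end IOPS the desktops will generate
frontend_iops = desktops * iops_per_desktop

# Back-end IOPS the array must service once the RAID write penalty is applied
backend_iops = (frontend_iops * (1 - write_ratio)) + \
               (frontend_iops * write_ratio * raid_write_penalty)

disks_needed = math.ceil(backend_iops / iops_per_disk)

print(f"Front-end IOPS: {frontend_iops}")
print(f"Back-end IOPS:  {backend_iops:.0f}")
print(f"Disks needed:   {disks_needed}")
```

With those assumed figures, 500 desktops at 10 IOPS each with a 70/30 write/read split on RAID 10 works out to 8,500 back-end IOPS and around 49 spindles, and it’s exactly that sort of figure that drives the storage design.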
Login VSI can help in this instance because it simulates users logging into the SBC/VDI environment and performing the tasks expected of end users. As such, there are several pre-defined workloads that can be used to simulate real-life examples. For example, the medium workload (which comes with the free licence) simulates a user logging in, browsing their Outlook mailbox, manipulating a Word document, PowerPoint presentation, Excel spreadsheet, PDF document and ZIP archive, and browsing a website with a Flash component (the Kick-Ass trailer, which is a very funny movie if you haven’t seen it already!). Timers are built into the process to simulate random wait times when a user drinks coffee, sends a text or talks to a colleague, for example. There’s nothing so random as a human being, so it’s not precise, but it does represent the sort of “scattered” workload you’d see in reality.
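To picture how those randomised think times scatter the load, here’s a minimal sketch (not Login VSI’s actual workload engine, just an illustration with made-up steps and timings):

```python
import random
import time

def think(min_seconds=5, max_seconds=30):
    """Pause for a random interval to mimic a user reading, typing or getting coffee."""
    time.sleep(random.uniform(min_seconds, max_seconds))

# Hypothetical workload steps - the real Medium workload drives actual applications
workload = ["open Outlook", "read mail", "edit Word document",
            "browse website", "update Excel spreadsheet"]

for step in workload:
    print(f"Performing: {step}")
    think()  # scatter the actions so sessions don't all hammer the host at once
```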
Login VSI Architecture
The refreshing approach from Login VSI is that you don’t need to spin up a SQL Server to capture your performance metrics and environment configuration (don’t you just get tired of having to commission a SQL box every time you need to fart?). This means that as well as reduced initial cost, the complexity is lower and the time to be up and running is reduced. All you need to provide are four elements :-
– Login VSI Share (can be anywhere on the network, but must be reachable and writable by all devices used in the test; see the quick check after this list)
– Login VSI Launcher (Windows machine that can be physical or virtual, which essentially performs the logins and spawns the test workloads)
– Login VSI Target (Windows machine with MS Office pre-installed, along with some other tools such as Flash Player, BullZip and Internet Explorer)
– Active Directory (a Login VSI OU is created, along with a Group Policy Object and some scripts that get copied into the NETLOGON share)
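Since every launcher and target has to be able to reach and write to the VSI Share, a quick sanity check run from each machine can save some head scratching later. This is just a hypothetical snippet (the share path is a placeholder):

```python
import os
import tempfile

# Hypothetical share path - substitute your own VSI Share UNC path
VSI_SHARE = r"\\fileserver\VSIShare"

def check_share(path):
    """Confirm the VSI Share is reachable and writable from this machine."""
    if not os.path.isdir(path):
        return f"{path} is not reachable"
    try:
        # Write and immediately remove a small test file
        fd, test_file = tempfile.mkstemp(dir=path)
        os.close(fd)
        os.remove(test_file)
    except OSError as err:
        return f"{path} is reachable but not writable: {err}"
    return f"{path} is reachable and writable"

print(check_share(VSI_SHARE))
```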
The good news is that you don’t need to rummage around dusty corners of the internet to get these tools; each of the four parts above comes with its own installer. A handy graphic lifted from Login VSI’s website below illustrates the simple architecture of the product :-
Login VSI Configuration
One thing that caught me out was that my VSI Share was on a Windows 7 machine. This would be fine on a very small scale, but Windows 7 shares do not permit more than 20 simultaneous connections. Login VSI exhibits the behaviour that the target session starts and the user logs in, but the desktop just sits there and does not spawn any application sessions. This had me confused for quite a while, as there are no error messages as such. If you go to one of the stalled desktops, unlock KidKeyLock (by typing vsiquit) and type the UNC path of the VSI Share into Start | Run, you will see an error about the number of concurrent connections to a Windows 7 share. Save yourself a lot of time and put the VSI Share on a Windows server or Samba share!
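If you do end up with the share on a client SKU and want to see how close you are to that cap, a rough check run on the machine hosting the share could parse the output of net session (the 20-session figure is the client SKU limit mentioned above):

```python
import subprocess

# Run this on the machine hosting the VSI Share. "net session" lists inbound
# SMB sessions; client SKUs such as Windows 7 cap these at 20.
WIN7_SESSION_LIMIT = 20

output = subprocess.run(["net", "session"], capture_output=True, text=True).stdout

# Session lines in "net session" output start with the client's UNC name
sessions = [line for line in output.splitlines() if line.startswith("\\\\")]

print(f"{len(sessions)} inbound SMB session(s)")
if len(sessions) >= WIN7_SESSION_LIMIT:
    print("At the client SKU connection cap - further targets will stall.")
```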
In a VMware View or XenDesktop environment, you need to run the target setup routine on your master image before you spin up a desktop pool/catalog. This ensures that all of the desktops to be tested have all the appropriate software installed. You also need to ensure you have Microsoft Office installed in advance. Any version from 2003 upwards is fine, but if you’re testing Office 2007, it’s recommended to install SP2 beforehand, as there are some known issues with Outlook that are resolved by this patch.
Once you have your VSI Share, your launcher workstation(s) (each launcher will take a maximum of 50 targets, though my testing tended to work better with a maximum of around 35) and your targets, you’re pretty much set. The next stage from here is to configure your environment using the Management Console. The main points of interest here are configuring the launcher names and the workload settings, such as workload type (light, medium etc.) and peripheral settings such as the Microsoft Office version (if the wrong version is listed, this can prevent the automated workload from running successfully). The management console itself is pretty straightforward and self-explanatory.
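Working out how many launchers you need is simple arithmetic; a tiny sketch with assumed figures:

```python
import math

# Assumed figures - adjust to your own test plan
target_sessions = 120       # total sessions you want to simulate
sessions_per_launcher = 35  # the ~35 per launcher that worked best for me

launchers_needed = math.ceil(target_sessions / sessions_per_launcher)
print(f"{launchers_needed} launcher(s) for {target_sessions} sessions")
```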

The screenshot above shows the test configuration. This is where the workload type is selected (Light, Medium etc.), along with the connection settings for the VDI environment. As you can see from the screenshot, Python is being used to connect to a Citrix XenDesktop Web Interface. This is because the login screen for Web Interface had been customised, and the Citrix connector for Login VSI could not recognise elements on the screen such as the login button and the available desktops. Citrix themselves provide some Python scripts for this connectivity and they work just fine. In a View environment, the existing Login VSI connector would probably work just fine, as would a “vanilla” XenDesktop environment.
The next step before actually getting to the testing phase is to define your launcher machines (use the Windows NetBIOS name rather than a DNS name or IP address, or you’ll likely see a few errors) and configure the settings you want for the workload itself. In my experience, the only setting you really need to look at is the Office version string: 14 for Office 2010, 12 for Office 2007 and 11 for Office 2003. The screenshot below illustrates the settings.
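If you’re not sure which version string to enter, a rough way to check what’s installed on the target image is to peek at the Office keys in the registry. This only confirms which version keys exist, so treat it as a hint rather than gospel:

```python
import winreg

# Map of Office internal version keys to the string Login VSI expects
OFFICE_VERSIONS = {"11.0": "11 (Office 2003)",
                   "12.0": "12 (Office 2007)",
                   "14.0": "14 (Office 2010)"}

def installed_office_versions():
    """List Office version keys found under HKLM\\SOFTWARE\\Microsoft\\Office."""
    # Note: 32-bit Office on 64-bit Windows lives under SOFTWARE\Wow6432Node instead
    found = []
    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\Office")
    index = 0
    while True:
        try:
            subkey = winreg.EnumKey(key, index)
        except OSError:
            break  # no more subkeys
        if subkey in OFFICE_VERSIONS:
            found.append(OFFICE_VERSIONS[subkey])
        index += 1
    return found

print(installed_office_versions())
```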


Further Steps
You also have the option of creating custom workloads, but this is not something I have experience of and, to be honest, not something I really had a need to use. If you just need some general benchmarks from your VDI environment, the Medium workload is recommended and is what most vendors use when they produce performance white papers for their VDI solutions (see Microsoft, Citrix and EqualLogic for examples).
At this stage, I’m not going to get too invested in the nuts and bolts of how the whole process works, but needless to say, if you’ve got this far, you’re pretty much ready to go. None of the workloads require access to the internet, nor do they require a connection to an Exchange server or any other network location; all workloads are fully isolated and self-contained. If you’ve done all the setup and configuration successfully, you’re now at the stage where you can actually run some tests. Consult the Login VSI documentation for session-specific settings, such as the number of sessions and the time delay between starting sessions (try to make this value sensible so you don’t saturate your VDI hypervisor within a few minutes of starting the test, although if you’re simulating an academic environment, that kind of login storm may be exactly what you want to model).
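For the ramp-up, a quick calculation of the delay between session starts over a given test window (figures are assumptions) can keep things sensible:

```python
# Assumed figures - tune to your own test window
sessions = 100
test_window_minutes = 50   # how long you want the ramp-up to take

interval_seconds = (test_window_minutes * 60) / sessions
print(f"Start a new session every {interval_seconds:.0f} seconds")
```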
Once you’re ready to start the sessions, you should have the launcher agent running on your launcher workstations (a command prompt box that pings the VSI share for work to do) and all target machines spun up and ready to be logged into. In part two of this blog, I’ll tell you more about how to interpret the results. Stay tuned!