23-12-14

AirWatch Local Event Review

IMG_20141210_094337648_HDR

A bit dated I know, but on 10th December my wanderlust took me to Milton Keynes, the home of concrete cows and the National Bowl. As an aside, I have mixed feelings about the place – saw the Foo Fighters there in 2011 and had one of the best days of my life but then I paid £7 for fish and chips at the same event, which frankly brought me out in a cold sweat. They weren’t even that good!

Anyway, I digress. The purpose of me attending this event was to get a much better idea of what the AirWatch suite of products is all about. I'm primarily a Horizon View and EUC guy, and now that VMware have acquired AirWatch there is an obvious overlap between the two technologies. In point of fact, some of the AirWatch solution is now bundled into the Horizon Suite. The event itself was for both partners and end users, and looking at the name tags there was quite a decent spread of industries represented, from partners to local government, education, legal and emergency services.

The day was hosted by Dave Horton of AirWatch and basically took the form of a single presenter covering multiple different focuses throughout the day, which was slightly unusual but seemed to work pretty well, and there was a good flow to proceedings. I'd assumed beforehand that AirWatch was simply another MDM solution, but as it turned out I couldn't have been further from the truth. I've broken each section down below.

Company Update

As at 2014, the company has 14,000 customers worldwide, which represents growth of 8,000 from two years ago when the figure was 6,000 customers. The interesting thing about AirWatch as a suite of products is that not only is it the market leader, it also sits in the top right corner of the Gartner Magic Quadrant. I take some industry analyst information with a grain of salt, but I know these sorts of facts hold a lot of influence with C-level decision makers, so this is an important differentiator for AirWatch to have.

What struck me about the customer breakdown was the breadth of industries where the AirWatch solution was present. As a snapshot, see below:-

  • 4 out of 5 Fortune 500 companies
  • 6 out of 10 top airlines
  • 3 out of 5 top ranked universities
  • 2 out of 3 worldwide hotel groups

So as you can see from above, the solution doesn't just fit well in one vertical but across many, and I thought this pretty impressive. 17 languages are supported, from the software interfaces themselves through to technical support. Product development is primarily centred in Atlanta, but there is also a development centre in Bangalore. There is a strong presence in the UK out of the Milton Keynes office, which hosts technical support, sales, Professional Services and marketing.

AirWatch was acquired by VMware for $1.5bn, which I think the presenter said was the same worth as YouTube when that was bought. Take that, fact fans!

AirWatch 7.3 Release Highlights

IMG_20141210_103557536

Dave then went over what is new in version 7.3 of the product, which I think he said has been available for around three months now. It’s easier to break this down into bullet points, see below:-

  • MDM – Enrolment to MDM can be restricted and there is a new feature for custom terms of use/EULA
  • QNX Agent support – this is at a fairly early stage but QNX is Linux based and forms the basis for BlackBerry 10 OS. It is also used in an increasing number of things such as cars and also appliances (Internet of Things, etc)
  • It is now possible to create a temporary administrator account for troubleshooting
  • Smart Groups can be created to exclude users from profile assignments (less restrictive for VIPs, for example)
  • Profiles now have version control
  • Compliance rules can be applied per platform (iOS, Android, WinPho etc.)
  • There is a self service portal where a user can self register a device into AirWatch, thus reducing impact on the Service Desk
  • Android is not easy to develop for, as there are up to 16,000 variants of the code out there on devices
  • Samsung SAFE and KNOX are now supported
  • Amazon Fire devices, HTC devices now fully supported with silent application install and uninstall prevention
  • OSX AirWatch AppCatalog now supported; the Apple Device Enrolment Program is also supported, and iOS devices can be pre-locked when shipped
  • Windows 8.1 now has MDM APIs and a VMware Fusion profile
  • Windows 8.1 has remote lock, enablement of Metro applications without a Live ID, agent compliance with Windows Updates, BitLocker enforcement and Firewall status
  • Rugged Android and Windows Mobile support
  • Mobile Application Management has application wrapping, Volume Purchasing Program renewable sTokens. Integration with Horizon Workspace
  • AirWatch Inbox application now optimised for iPad with 1 click conference call
  • AirWatch Tunnel provides SSL-VPN on iOS
  • AirWatch agent can tell if a device has been rooted or jailbroken, if the device has been compromised a remote wipe can be performed
  • Secure Content Locker can whitelist/blacklist file types and creation of secure content on mobile application is now supported. Possible to set policy to require uploads to be backed up to corporate storage
  • Geofencing uses GPS to tie profiles to physical locations, though this is not as feature complete as NPS
  • App wrapping now supports single sign on
  • Applications flagged as corporate and managed by AirWatch will be deleted by Enterprise Wipe, even if it is a public application from iTunes etc.
  • URLs can be transparently redirected to use SSL-VPN

Laptop Management

IMG_20141210_104017658

  • Configuration management of laptops can provide the following features:-
    • Connections to wi-fi, VPN, Ethernet etc.
    • Certificates
    • OWA and Outlook accounts
    • Proxy settings
    • Software distribution over the air
    • Automated workflows for installation

IMG_20141210_105613864

  • Asset tracking can provide the following features:-
    • Detailed laptop and end user inventory
    • Export reports and logs from the AW console
  • Enterprise wipe removes applications and their data because each app has a sandbox – other data is left intact (including personal data)
  • AirWatch Inbox and browser currently available on Windows only
  • In the profile wizard, hover over the platform with the pointer to see what policies can be enforced (wi-fi, FDE, etc)
  • Device or user profiles can be used
  • BitLocker keys can be stored in AirWatch
  • Full Disk Encryption for personal and/or corporate applications on OSX
  • Android Secure Launcher features:-
    • Highly restricted
    • Drag and drop icons onto simulated screens in management console to simulate the end user experience
    • Add wallpaper, folders, app icons etc to the start screen
    • Device restart always restarts on secure launcher

Windows Phone 8.1

  • Blacklist applications by vendor, e.g. Rovio
  • Disable Store option
  • Lock wi-fi settings
  • Silent install and update

Android

IMG_20141210_103643426

  • Android Workspace is classed as managed, which is not as fully featured as MDM managed but is a better option for BYOD use cases as opposed to corporate devices
  • Analytics, so which applications are used, when and for how long, and in the web browser which sites are visited and for how long, etc.
  • Workspace provides “dual persona” mode for personal devices, so corporate information can be wiped leaving personal data intact

Content

IMG_20141210_103353288

  • Dynamic watermarks can be added – can see who may have leaked corporate documents. Fully customisable.
  • Set effective expiration date for documents
  • Content can be stored anywhere:-
    • AirWatch Cloud
    • Office 365
    • OneDrive
    • Google Docs
    • Amazon S3
    • SharePoint on premises
    • Hybrid solutions
    • Network share
  • Documents stored in Secure Content Locker can be edited, annotated and tagged with comments, and there is also an activity feed
  • New in 7.3 is creation of documents in SCL
  • Report on content statistics – expiring content/missing devices etc
  • SCL can be used with MDM, Workplace or standalone mode
  • SCL is HTML5 so drag and drop files and multiple file selection is available
  • 36 repositories available – anything that supports the CMIS standard is supported
  • Use the Mobile Access Gateway (MAG) when services are not public facing
  • SCL sync client for Windows and Mac OSX like DropBox with automatic synchronisation
  • Share documents on SCL with external users, password protection provided
  • AirWatch MarketPlace has partner solutions that integrate with AirWatch
  • Real time chat collaboration with audit compliance and encryption like BBM. Integration with Lync is on the roadmap

The main takeaway from this event was that AirWatch is far more than just mobile device management. I went there with a pre-conceived idea that it was basically for management of iOS and Android devices, but it's a whole lot more. I know that we are getting good traction at ANS around Secure Content Locker, as many customers want an "Enterprise Grade DropBox". It was interesting to see how consumer electronics have bled features and expectations into the usually highly managed and slow to react corporate environment.

The other thing I was very impressed with was the management interface for AirWatch. It was very slick and very well designed, and the workflows seemed very intuitive. Obviously a demo only shows you so much, but I hope to get much more hands on with the product suite in 2015. It's easy to see why it is the market leader and well regarded by industry analysts: it can cover a wide spread of use cases and platforms, and has excellent integrations into other platforms which may already be in customer use. I also liked the way that the issue of preserving corporate data and ring fencing personal data on non corporate devices has been addressed in this release.

 

IMG_20141210_113212588

 

Apologies for the poo pictures – the perfect storm of a camera phone and the projector not being too bright!

 

28-11-14

Notes From The Trenches : Horizon View Multimedia and Graphics Tuning

In the last few weeks I've spent a lot of time with different customers from different verticals looking at the performance of multimedia playback and hardware graphics rendering for applications. I thought I already had a good grasp of how all of these technologies work and how best to tune them, but it's not until you're presented with varying requirements that your skills are really put to the test, trying to find bottlenecks and, in some cases, turning what you thought you knew on its head.

This post relates solely to VMware Horizon View, but I guess could be applied to any virtual desktop solution, whether that be XenDesktop, vWorkspace or anything else. From a View perspective, we are always hammered with the message “know your use cases”. In many ways, this message gets saturated to the degree that a touch of snow blindness kicks in and you just make some general assumptions about what will work and what won’t.

To recap, at the very least before starting a View proof of concept (PoC), you should be asking yourself the following questions :-

  • How many users will require multimedia playback?
  • What level of quality is expected? 480p? 720p?
  • Will the videos be in full screen or will a smaller window suffice?
  • What applications will require OpenGL or DirectX support?

By covering off the above points, you put yourself in a position where the proof of concept can address a high proportion of user requirements. Don’t beat yourself up though if you don’t nail it 100% out of the gate, getting it right on day one is unheard of! Taking each point in turn, let’s look at why the responses to these points matter.

How many users will require multimedia playback?

Quite often, organisations limit access to video streaming sites such as YouTube, Vimeo, iPlayer and others to preserve office bandwidth. From a View perspective, this can do you a favour, but there are always use cases that may well slip under the radar and result in users having a degraded VDI experience that gives the solution an unmerited bad name. News websites such as Sky News or BBC News all have multimedia content these days, from radio interviews to video clips in small boxes with a full screen option.

The result of this question will help determine an appropriate pool design. If you have users that will use a lot of video (presenters, trainers, academics, for example), then it would make sense to have a dedicated pool of desktops for these users with more resource than would be given to “regular” users. Also, VMware’s recommendation is that good quality multimedia playback requires 2 vCPUs per virtual desktop, which in turn affects your infrastructure sizing requirements as this will impact your host:desktop consolidation ratios.
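As a rough worked example (my numbers rather than VMware's): a host with two 10-core sockets gives you 20 physical cores, so at a fairly typical 8:1 vCPU-to-core overcommit that's around 160 vCPUs to play with – roughly 160 single vCPU desktops, but only around 80 once everyone gets 2 vCPUs for multimedia. The ratio you can actually sustain depends entirely on the workload, so treat this purely as illustration and validate it during the PoC.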

If the answer to this question is "all users", then the next step is to determine what quality is expected.

What level of quality is expected? 480p? 720p?

If there are users expecting full screen 1080p native performance at 60 fps (same as a blu ray player etc.) then this level of expectation should be reset. The nature of a virtual desktop solution means that this won’t be achieved due to bandwidth, compression, hardware resource and other potential bottlenecks along the way. Remember a blu ray player has a single cable running from it to the TV, so there isn’t a bunch of other traffic flying down that cable, the cable length is typically less than a metre (or 3 feet 3⅜ inches in old money!) so it’s barely even an apples to oranges comparison.

In a View environment, PCoIP is the display protocol of choice. RDP is also available, but generally lacks the flexibility and tuning options afforded to us by PCoIP. In addition to this, there are several hardware options to help augment and improve the PCoIP experience. If you recall, Teradici are the owners of PCoIP, which VMware use in View, and Teradici offer both host card and zero client options to offload PCoIP processing to dedicated hardware.

One other thing worth bearing in mind is that in my experience, the human eye can’t really tell the difference between 720p and 1080p, so if you get mired in discussions with end users about this, you’re kind of missing the point. If you can deliver video at a good resolution (720p) and a good frame rate, the rest is just splitting hairs in my opinion.

Will the video be full screen or in a smaller window?

This obviously matters because the larger the video playback surface area, the more resource is required to push it along. In the tests I've done, standard 480p video in full screen uses around 20MB more GPU RAM than a video embedded in a web page (BBC News is a good example of this). If full screen, high quality video playback is required, you need to factor this into pool designs and the PoC. As always though, benchmark it yourself during the PoC phase, as your mileage will inevitably vary.

What applications will require OpenGL or DirectX support?

As we ascend to the more demanding groups of users, it’s key to know who requires extra grunt on their virtual desktop. This again affects pool design but also means that some consideration will need to be given to specialist graphics hardware to enable these users to work effectively in a virtual environment as they do in the physical world. Some examples of applications requiring this level of support include but are not limited to:-

(All OpenGL apps)

  • Adobe After Effects, Photoshop CS3/CS4, Premiere Pro
  • AutoCAD
  • Google Earth
  • Google SketchUp
  • Scilab
  • Virtools

DirectX seems to be typically used in gaming, so it’s debatable whether or not there’s a use case there in general environments. That being said, one recent customer teaches computer game design and coding, which would place this use case right in the middle of your View deployment.

What are my options?

There is a very good white paper from VMware that discusses graphics acceleration in View and is well worth a read. It focusses primarily on the NVIDIA GRID solution, which I've discussed previously, and is available here. So let's say you've done your scoping exercise and completed a desktop assessment – where do you go from here? There are lots of resources out there, if your Google-Fu is up to snuff, that will tell you what different varieties of hardware are available; I'm just going to provide a simple reference based on my testing.

Multimedia Playback

For good multimedia performance, look for a thin or zero client with the Teradici chip installed. I recently tested the 10Zig V1200P Zero Client with a customer and I have to say the multimedia performance was exceptional. This was partly due to the excellent bandwidth to the desktop that the customer had, but also because with the Tera2 chip installed, PCoIP processing gets offloaded to a dedicated hardware device, in a similar way to how a TCP/IP Offload Engine works on a network card. It almost goes without saying that hardware offload will generally outperform software processing, and that was certainly the case in my experience.

If you can't stretch that far budget wise, ensure you follow best practice and have 2 vCPUs per virtual desktop, as video playback without hardware offload is a purely CPU based operation. If you run up Task Manager on two desktops, one uniprocessor and one multiprocessor, you should see the difference in CPU spiking during multimedia playback. Remember though not to oversubscribe vCPUs – there is not a linear improvement the more vCPUs you add to a desktop, and in actual fact you could slow the whole environment down. This is well known in the virtualisation community and there is a good explanation as to why here. So in short? No more than 2 vCPUs per desktop unless a specific use case calls for it, and that would depend on desktop applications that can actually make use of multiple processors.

Applications requiring OpenGL or DirectX support

Primarily in the case of OpenGL, I have seen improved performance using NVIDIA GRID K1 cards operating in vSGA mode. As stated in a previous post, make sure you have the latest versions of ESXi, Horizon View and the NVIDIA VIB on the ESXi host (there are a couple of quick commands for sanity checking this at the end of this section). So how much better is hardware acceleration? A picture says a thousand words, so I've pictured below the output from the Passmark 3D test, showing a desktop using software rendering with 128MB video RAM alongside a desktop using hardware rendering with 128MB video RAM.

IMG_20141120_134042929 IMG_20141120_134035087

On the same two desktops I've also got the 2D Passmark test results shown below:-

IMG_20141120_133950443 IMG_20141120_134005415

Apologies for the general rubbishness of the pictures, they were taken with a phone camera, but it gives you a sense as to what kind of performance you can expect with hardware acceleration. Thankfully Passmark is a free tool and can be downloaded from here.
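On the point above about having the latest NVIDIA VIB in place, the sort of quick checks below can be run from an SSH session on the host before blaming anything else. The exact VIB name will vary depending on the version you've downloaded, so treat the output as illustrative rather than definitive:

  # Confirm the NVIDIA VIB is installed on the host
  esxcli software vib list | grep -i nvidia

  # Confirm the host can actually see the GRID card
  lspci | grep -i nvidia

  # The xorg service needs to be running for vSGA to work
  /etc/init.d/xorg status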

How much Video RAM should I allocate to my pool?

Great question, and somewhat subjective. VMware state that if you are using Windows 7 with Aero (and let's be honest, most people are), you should set a value between 64MB and 128MB in the pool settings (I couldn't find any change in this advice for View 6). In my experience this is fine for basic use cases, but where multimedia is required, and especially good quality video playback, that won't be enough. We talk all the time about right sizing View deployments, but one real gap in the analysis for me is how to right size video RAM per pool.

I found a free utility called GPU-Z which is really useful for benchmarking performance during multimedia playback and determining the high watermark for video RAM usage. This can then be taken forward into the View design and ensure multimedia users have enough resource for their use case. GPU-Z can be downloaded here and is pretty simple to use. For my testing purposes, I ran it up during normal Windows navigation (starting and closing apps, web pages etc.)

gpuz-1

The above screenshot shows the tool running on a laptop and gives you an idea of the kind of information that can be gleaned. The Sensors tab is the one with all the key information; the screenshot below shows the "idle" state of the GPU during normal operations.

gpuz-2

As you can see above, we’re already at 206MB of video RAM and we’re not really doing anything. If your pool is set to the recommended 128MB RAM, there is an obvious pinch point there already. This will lead to degraded multimedia performance as there isn’t sufficient resource. Playing a small screen video from the BBC News website, the video RAM usage climbs to 284MB as shown below.

gpuz-3-video-sm

And then finally, on the full screen version of the video, the usage climbs still further to 363MB. So in this particular case, you'd be looking to set an initial pool high watermark of 370MB video RAM (for example) to give the user sufficient horsepower for video playback in View. That being said, the results will most likely be lower in a virtual environment, so make sure you continue to monitor usage during the PoC phase to ensure that the pool video RAM size is neither under- nor over-specced.

gpuz-4-video-lg

Conclusions

In summary, I’d say look at the following:-

  • Review end user requirements for graphics performance, including multimedia and application support
  • Spend time tuning PCoIP if you find that bandwidth is a constraint, but remember this is usually a balance and/or trade off between picture quality and playback smoothness. Audio quality is also affected by tuning maximum audio bandwidth
  • Conduct a PoC to level expectation appropriately
  • Use free tools such as Passmark and GPU-Z to accurately benchmark the environment and right size the capacity
  • Obtain eval units if you are going down the thin/zero client route and test out all the different use cases you know of to see which unit is most appropriate
  • vSGA with NVIDIA GRID cards can be very cost effective when applications require additional 3D resource
  • Consider the use of a Teradici APEX 2800 card to offload some graphics processing to dedicated hardware in the host (caveat : I haven’t tested this and Teradici – feel free to loan me one!)

18-11-14


UK VMUG Event Review

Yesterday I had the pleasure of attending the fourth UK-VMUG annual conference at the National Motorcycle Museum in Solihull. For those that didn’t make it, I’ve put together an event review for your viewing pleasure. Apologies for the crapness of the pictures, taken with my phone unfortunately!

 

Joe Baguley Keynote

IMG_20141118_091319

After a brief introduction from VMUG leader Alaric Davies, the day started with the now usual keynote from Joe Baguley, CTO for EMEA. This year the keynote was entitled "Rant as a Service" and, after setting the scene for around 30 minutes, the key message was still around the software defined enterprise. My interpretation was that there was a small pop at a hyper converged company whose name may or may not contain nuts, on the basis that EVO:Rail and EVO:Rack can give you the same level of support and performance without having to buy into a single vendor. I've been feeling for a while that there isn't a lot of love lost between the two parties, and I don't know if that's true, but I don't find it particularly helpful when constant implied barbs are being traded. Just my opinion!

The point of EVO:Rail is to have the infrastructure up and running within 15 minutes. The value here is that you can go to any of eight partners and pick which stack and value add you want. It's not single vendor lock-in as such, as most customers already have an existing relationship with the likes of Dell, etc. EVO:Rail is 2U in size and has four nodes installed – not dissimilar to Nutanix and UCS in that respect, though of course the UCS chassis has a larger form factor. For larger installations or special use cases such as VDI with NVIDIA graphics, the bigger EVO:Rack will be required.

One interesting line was the ongoing idea of abstraction and obfuscation, in as much as key components such as disks and RAID controllers are being replaced by software and by public and hybrid cloud solutions. This of course is becoming transparent to the "end user" as we move towards a hybrid cloud type of world. If a disk controller fails, it's OK, software can take care of that. Lose a data centre? That's OK too, we'll just move to another one in the background. I'm not sure we're totally there with that one yet, but it's an interesting concept nonetheless.

Then we had the discussion about what "Enterprise Scale" actually means these days. As consumer demands increase exponentially (photo uploads, data requirements, data production, etc.), most consumer services really are "Enterprise" grade now, as they have millions of people using them daily, not just the tens, hundreds or even thousands of users in an "enterprise" environment.

White boxes are also now taking the place of large monolithic proprietary solutions. EVO:Rail was again mentioned as an example of this, where you get a pre-built, predictable and validated vSphere environment from whichever hardware vendor you prefer. The irony that you're still locked into VMware technologies was skipped over at this point, but I can see the point being made.

IMG_20141118_091425

VMworld Update – Julian Wood

I then went to the VMworld Update session with Julian Wood. One thing I'd have to say is that there was too much to fit into 40 minutes. That's not Julian's fault – as he noted, once you look into it, there are so many product releases, updates and acquisitions to keep track of that you could spend all day talking about them! There was some discussion around the vRealise suite (I'm not spelling it with a "z"!), what that means and how there are now both on and off premises solutions for it. vRealise is essentially the management and automation tools bundled into a suite, so products such as vCloud Automation Center, Log Insight, vCenter Operations Manager and so on.

CloudVolumes was also discussed, where applications are installed to a VMDK which is then connected and presented to desktops in a fraction of the time it takes to do a ThinApp package. As I was listening to this, I started to think: what are the storage requirements, though? Is it read intensive, or are blocks cached? How does this work? Do we require any back end infrastructure such as MS-SQL?

On the EUC side, big strides continue to be made and VMware are really competing with Citrix in the application presentation stakes, as well as adding further improvements to the core View product, including Cloud Pods (or Linked Mode for View, as I like to call it!), where you can break the current scalability limits but also provide an additional site failover for virtual desktops if required, plugging one gap in the previous product set.

vSphere Futures – Duncan Epping

The next session was with Duncan Epping. His sessions are always well attended as he’s usually on the bleeding edge of what the company is doing internally, plus I’ve found him to be pretty honest in his responses to some issues that have cropped up, especially around Virtual SAN. I made quite a few notes around what was discussed, and it’s probably easier to break them down into bullet points:-

  • All flash Virtual SAN coming, to increase the configuration options for two slot blades, where currently you need flash for cache and spinning disk for content
  • Virtual Volumes (VVols) policies coming that will be applied per VM
  • This functionality will be based on an array that supports virtual volumes
  • IO filters directly in the hypervisor for those arrays not VVol aware
  • Storage DRS VM IOPS reservations, so we can migrate workloads to other storage if reservations are not met
  • Storage DRS has better awareness of thin provisioning, dedupe and replication
  • Resource and Availability Service is a new web based tool that uses exported DRS settings to simulate failure of resources and ensure design is correct, validating such things as Admission Control settings
  • FT support for up to 4 vCPU
  • No more vLockstep or shared VMDK for Fault Tolerance; 10Gbps networking will be required
  • The ability to vMotion “anywhere”, requirement is that both vCenters must be in same SSO domain
  • vMotion has a 10ms latency tolerance now, working on 100ms tolerance for long distances
  • The vCenter Appliance now scales as well as the Windows version, and will be the future of vCenter releases
  • SQL server supported externally for the vCenter Appliance
  • Task pane will be coming into bottom of Web Client
  • Less nested right click options to make the Web Client interface cleaner
  • Task  concurrency, performance  charts and other features will be introduced into the Web Client
  • Linked Mode will be available for the vCenter Appliance
  • Content library for ISOs etc, replicated across sites. Also includes templates, OVFs etc. Same as Hyper-V Libraries, by the sounds of it

IMG_20141118_105540

One very interesting thread was around Project Fargo. This in essence is a “re-imagining” of the snapshot process and will allow for the creation of Windows virtual machines in around 2 seconds. In the lab, Linux VMs were spun up in less than that, the overhead on the Windows side was mainly down to customisation and joining AD etc. Another way of thinking about it is “Linked Clones on steroids” in the sense that you have a parent virtual machine and lots of child virtual machines. Duncan’s blog entry as linked above goes into some good detail on what you can expect from this initiative.

Horizon View Architecture and Design – Barry Coombs & Peter Von Oven

I then went to the session by Barry Coombs and Peter Von Oven about Horizon View Design and Architecture. This wasn't really a "death by PowerPoint" session, but more a high level run through, with brief discussion, of the key points you should be looking for in a good Horizon View design. There are always little nuggets or anecdotes that can be useful, that maybe you haven't come across before and that only really come out of experience. One good point from this session was that you should never let the IT team speak on behalf of the end users – in other words, don't assume IT necessarily know what the user experience is like, because they can't know every individual use case.

The importance of performing a desktop assessment phase and also a proof of concept was re-iterated, and I can't agree with this enough. Chatting to IT and some end users is not enough. It's useful as part of the whole engagement, but you also need key performance metrics and a proof of concept to see what works and what doesn't. Think of a PoC as the first draft of a document that requires lots of iterations to get it "just right". To perform a desktop assessment and some stakeholder interviews and then think you can roll out an effective VDI environment first time out of the gate is total fantasy.

Any VDI deployment (whether it's View or AN Other solution) should be an improvement on the current "physical" end user experience. Again, this is a given. If you're spending time and money replacing a solution people are familiar and comfortable with, it needs to be visibly an improvement on what they already have, or the solution will simply acquire a "bad name". One interesting idea was the notion of having a "Departmental Champion" – an end user who wants to positively influence the outcome of the project. They can interface with other users and help cascade information and feedback backwards and forwards. This can give you a view inside the PoC that you would not normally have.

Some other brief points included not forgetting to factor in VM and graphics overhead when right sizing a solution (these are commonly forgotten about – guilty!), and that user concurrency should be measured in advance. Generally I use a rule of thumb of 80% concurrency, but in an organisation that has shift patterns this may not be appropriate. Make sure the solution scales!

IMG_20141118_131653

EUC Update – Peter Von Oven

My next session was another EUC session, this time with Peter Von Oven from VMware. Again, a lot of key messages came out pretty thick and fast, so a bullet point summary is included below:-

  • VMware’s strategy is still the three pillar strategy of SDDC,  EUC and Hybrid Cloud
  • AppVolumes (formerly known as CloudVolumes) will be available in December
  • Horizon Workspace can disable icons based on physical location. It’s context aware in that sense. So for example, R&D portal is not accessible from Starbucks, but is from a corporate LAN
  • Horizon Workspace provides a central point of management
  • AppVolumes will be in the Enterprise Edition of Horizon View
  • View 6 makes it possible to co-exist and transition from XenApp environments
  • Windows Server 2008 or 2012 required for RDSH Application Publishing, and you can mix and match if required
  • Easier than upgrading to XenApp 7.5, in the sense that a new infrastructure does not need to be stood up
  • Seamless application remoting,  even on Mac
  • Use vCOps for View and do a 60 day assessment of your environment – though I’m not sure you get the same level of information as you do with say Stratusphere FIT
  • Use thin clients not zero for unified communications in VDI
  • Fully supported by Microsoft for Lync over PCoIP
  • Webcam and mic done using USB redirection
  • Use case for Thinapp is portability and isolation, AppVolumes for performance
  • Application catalogue allows user self service of applications, can remove after 30 days etc
  • Workspace Suite is Horizon + AirWatch, includes Horizon Advanced for Workspace
  • vGPU, like Citrix's, coming Q1 next year – vGPU is covered here and is essentially dedicated hardware VGA acceleration but with the consolidation ratio of vSGA. Still uses the NVIDIA driver for application validation and support
  • Horizon Flex out in December, delivers containerised desktops in much the same way as the old VMware ACE product
  • No dependency for Horizon Flex on Mirage at the back end
  • Requires Flex policy management server and provides time limits, grace period, remote lock and wipe, USB lock down, etc

IMG_20141118_140229

Cisco and VMware – Chris Bashforth

For my final breakout of the day, I went to the Cisco partner presentation on UCS and VMware View. I have to say I didn’t find this session all that useful. I don’t know if it was due to the graveyard slot at the end of a long day or if it was just the general dryness of the topic, but I never really felt like the audience engaged with the speaker and the atmosphere fell a little flat. We were given a brief overview of UCS for those who have never seen it before and then a quick run through of the blade and chassis models available and which are recommended for VDI deployments.

I'm still quite new to UCS, having been an HP guy all of my career, so there were some interesting items in there, but I didn't feel I got a lot out of this session and left a little disappointed. For those folks wanting to use NVIDIA GRID cards in their UCS deployments, you will need to use C-Series rackmount servers, which have two slots available per server for the purpose. B-Series blades are densely packed and simply do not have the space to accommodate this card.

One thing to correct is the speaker's comment that NVIDIA vDGA will support 8 users per server – this isn't true. Direct passthrough means that you connect the physical GPU to the virtual desktop on a 1:1 basis. I can only assume he got mixed up with the upcoming vGPU, which will be a similar passthrough arrangement but with the ability to get a higher consolidation ratio of up to 8. If I misinterpreted these comments, please feel free to let me know.

IMG_20141118_154030

Closing Keynote – Chris Wahl

The closing keynote was from Chris Wahl, industry legend and double VCDX. The force is strong with this one! The session was entitled "Don't Be a Minesweeper". I went into the session wondering what the correlation was between stealing leftover bits of beer from tables (my definition of a Minesweeper) and the IT industry, but it turns out he was referring to the cheesy clicky clicky game of previous Windows vintages. The general gist was that automation is the way forward, we're seeing that now, and it pays dividends to be ahead of the curve by learning some scripting now, whether that be PowerShell, PowerCLI, Python or anything else.

I did particularly enjoy Chris's attempt at using British slang. Top marks to him for differentiating between bollocks (bad) and the dog's bollocks (very good). It's not always easy for an American to grasp such a concept, depending on whether or not said objects are canine connected, but I think he did pretty well!

IMG_20141118_164222

Summary

Overall it was a very good day and a hearty well done to the VMUG committee who put it all together. This was my third VMUG-UK and each time it just keeps getting bigger. I don’t know how many showed yesterday, but I heard on the Twittervine that nearly 600 had pre-registered, which is absolutely fantastic. I did wonder if the event is now starting to outgrow the venue – the solutions hall was packed and difficult to navigate and lunch and brew breaks got quite cramped for space, but that’s a relatively minor thing.

I didn’t get much chance to look at the booths in the solutions hall, but it’s difficult when you’re a partner with long standing relationships with vendors to have something new to talk about sometimes. I did however get to see some old ex-colleagues as well as chatting to some folks I hadn’t seen in years, which was great.

 

08-11-14

Adventures in NVIDIA GRID K1 and Horizon View

Just had a really interesting week helping a customer with a proof of concept environment using Horizon View 6 and an NVIDIA GRID K1 card, so I thought I’d blog about it. You know, like you do. Anyway, this customer already had a View 5.2 environment running on vSphere 5.1 on a Cisco UCS back end. They use 10 Zig V-1200 zero clients on the desktop to provide virtual desktops to end users. As they are an education customer providing tuition on 3D rendering and computer games production, they wanted to see how far they could push a K1 card and get acceptable performance for end users. Ultimately, the customer’s goal is to move away from fat clients as much as possible and move over to thin or zero clients.

I have to say that, in my opinion, there is not a great deal of content out there about K1 cards and Horizon View. One of the best articles I've seen is by my good friend Steve Dunne, where he conducted a PoC with a customer using dedicated VGA. In our case, we were testing vSGA (shared graphics acceleration), as the TCO of dedicated would be far too high and impossible to justify to the cheque signers. This PoC involved a Cisco UCS C240-M3 with an NVIDIA K1 card pre-installed. The K1 card has four GPUs and 16GB of RAM on board and takes up two PCI slots.

I’m not going to produce a highly structured and technical test plan with performance metrics as this really isn’t the way we ran the initial testing. It was really a bit more ad hoc than that. We ran the following basic tests:-

– Official 720p “Furious 7” trailer from YouTube (replete with hopelessly unrealistic stunts)

– Official 1080p “Fury” trailer from YouTube  (replete with hopelessly unrealistic acting)

– Manipulating a pre-assembled dune buggy type unit in LEGO Digital Designer

– Manipulating objects in 3DS Max

When we started off at View 5.2, we found that video playback was very choppy, lip sync was out on the trailers and 3D objects would just hang on the screen. Not the most auspicious of starts, so I ensured the NVIDIA VIB for ESXi 5.1 was the latest version (it was) and that ESXi was at 5.1 U1. We made a single change at a time and went back over the test list above to see what difference it made. We initially set the test pool to use the hardware 3D renderer, with 256MB of video RAM. The gpuvm command was used to confirm the test VMs were indeed assigned to the GRID K1 card, and we used nvidia-smi -l to monitor GPU usage during the testing process.
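For anyone wanting to repeat the monitoring side of this, it was nothing more sophisticated than the two commands below, run from an SSH session on the host (the five second refresh interval is just my preference):

  # Show which VMs are assigned to which physical GPU on the card
  gpuvm

  # Watch GPU utilisation and memory usage, refreshing every 5 seconds
  nvidia-smi -l 5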

Remember that when you configure video RAM in a desktop pool, half of the RAM is assigned from host memory and the other half is from the GRID K1 card, so keep this in mind when sizing your environment. The goal of the customer is to provide enough capacity for 50 concurrent heavy 3D users per host, so two GRID K1 cards per host is what they’re looking at, pending testing and based on the K120Q profile.
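To put some very rough numbers on that (back-of-the-envelope only, based on the published GRID K1 specs rather than anything we measured): each K1 card carries four GPUs with 4GB apiece, and the K120Q profile allocates 512MB per desktop, which works out at 8 desktops per GPU, 32 per card and 64 per host with two cards fitted – comfortably over the 50 concurrent heavy users being targeted, assuming the performance holds up under real workloads.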

So after performing some Google-Fu, the next change we decided to make was to add the Teradici audio driver to the base image. The main reason for this was that there is apparently a known issue with 10Zig devices and audio sync, so we thought we'd give it a try. Although audio quality and sync did improve, it still didn't really give us the results we were looking for.

Having gone back to the drawing board (and forums), the next change we made was to upgrade the View agent on the virtual desktop from 5.2 to 5.3. Some customers in the VMware Communities forums had observed some fairly major performance improvements doing this, without the need to upgrade the rest of the View infrastructure to 5.3. We did this and boy did things improve! It was like night and day and the video was much improved, barely flickered at all and the audio was perfect. At this point we decided that View 5.2 was obviously not going to give us the performance we needed to put this environment in front of power users for UAT.

The decision was taken to upgrade the identical but unused parallel environment at another site from vSphere 5.1 and View 5.2 to vSphere 5.5 U2 and View 6.0.1. The reasoning behind this was that I knew PCoIP had improved markedly in 6.0.1 from the 5.x days, plus it meant we were testing on the latest platform available. Once we upgraded the environment and re-ran the tests, we saw further improvement without major pool or image changes. We updated VMware Tools and also the View Agent to the latest versions as part of the upgrade, and the customer was really impressed with the results.

In fact, as we were watching the 720p and 1080p videos, we remarked that you’d never know you were watching it on a zero client, basically streamed from a data centre. That remark is quite telling, as if a bunch of grizzled techies say that, end users are likely to be even more chirpy! We also performed more rudimentary testing with LEGO Digital Designer and also 3DS Max, with improved results. The PoC kit has now been left with the customer, as really we need subject matter experts to test whether or not this solution provides acceptable end user performance to totally replace fat clients.

What is the takeaway from this?

The takeaway, in my opinion, is that VDI is following a similar path to the one datacentre virtualisation followed a few years back. First you take the quick and easy wins such as web servers and file servers, and then you get more ambitious and virtualise database servers and messaging servers once confidence in the platform has been established.

VDI started with the lighter "knowledge worker" use case – someone who uses a web browser and some Office applications at a basic level. Now that this target has been proven and conquered, we're moving up the stack to concentrate on users who require more grunt from their virtual desktop. With the improvements in Horizon View and ESXi, and now with the support of hardware vendors such as NVIDIA and Teradici, native levels of performance for some heavy use cases can be achieved at a sensible cost.

That being said, running a PoC will also help find where the performance tipping point is and what realistic expectations are. In our case, early testing has shown that video, audio and smaller scale 3D object manipulation are more than feasible and a realistic goal for production. However, much heavier tools such as the Unreal Development Kit may still be best suited to dedicated hardware rather than a virtual desktop environment. The one thing I haven't mentioned, of course, is that the customer has GigE to the desktop and a 10GigE backbone between their two sites. This makes a huge difference, and I doubt we'd have seen the same results on an equivalent environment with 100Mbps to the desktop and a 1Gbps backbone.

The customer will be testing the PoC until the end of December, hopefully I can share the results nearer the time as to whether or not they proceed and if so, how they do it. Hopefully for anyone researching or testing NVIDIA GRID cards in a VDI environment, this has been some help.

NVIDIA GRID Card

 

17-10-14

VMware Flings are a curious concept. Not quite production code, yet not quite hobby code either. However, it’s a veritable goldmine of useful tools and applications to complement, manage and augment your VMware estate. For example, Auto Deploy is a very useful feature of vSphere, but I don’t see a lot of it in the wild as most folks think it’s a little too “command line” to be worth the effort. Having to do most of the work in PowerShell puts a lot of people off (and I have to say, I don’t blame them. I winced when it came up on my VCAP-DCA exam). So what has someone done? Written a GUI tool for it. A solution that fixed a problem, not a solution looking for a problem.

I’m not here to discuss that one, however. I’m primarily an EUC boy and there are several useful (did I say free?) tools on VMware Flings that can bail out your average Horizon View Joe. One of which is ViewDbChk. Yes, I know it doesn’t have a fancy name, but it’s a true bacon saver in the sense that I found myself saying “Wow, ViewDbChk really saved my bacon there”. As View folks will know, data is splattered across various different repositories, including a database for View Events, a database for View Composer, a database for vCenter and the ADAM/LDS database, replicated between the Connection Servers.

This is where ViewDBChk comes into play. On occasion, what is seen in vCenter and what is shown in View Administrator can get a little out of whack, and this can slow you down or cause you unforeseen issues. For example, if you’re plain impatient like me, you may find that a pool deletion has apparently hung, so you help it along by manually torching some VMs and replica VMs from vCenter (in the lab, obviously. Never in the real world!). At this point the delete task can get stalled because there are now no objects for View to refer to, or things could be the other way around where View has left orphaned VMs behind in vCenter and you just need to perform a little housekeeping. ViewDbChk is really useful in performing this clean sweep and keeping your vCenter and View installations in perfect sync.

To use the tool, first download it from the Flings website, unzip it and copy the files over to any Connection Server in the View installation you want to tidy up. Then you need to start a command prompt, as ViewDbChk is a command line tool, ensuring you right click and use the "Run As Administrator" option, otherwise you'll get some fairly cryptic errors – this caught me out at first. From your administrator command prompt, run ViewDbChk --scanMachines. This will run through and check your View estate for issues, as shown below (desktop/pool names redacted!):-

 

scan1

 

From this point on, it's really just a case of following the bouncing ball. One thing to note is that for me, just typing "y" or "n" for the questions did not seem to work properly, so I typed in "yes" and "no" instead. Obviously, for the tool to work its magic, you kind of have to let it do its thing 😉

Once some duff VMs have been found, you are asked to disable the pool affected. Then as shown below, the affected VMs are listed with the reason for the ongoing issue, in this case the master VM or snapshot couldn’t be found in vCenter (ooops!). At this point the tool will ask you if you want it to tidy that up. Say yes and the tool will do the rest. Once all affected VMs have been cleaned up, you are then prompted to re-enable the pool, as per the screen shot below.

scan2

This is all well and good for a couple of VMs, but what if your View environment is in a stinky old state? Well, you can add switches to the tool to force through a tidy up without having to constantly say "yes". Also worth bearing in mind is the built-in limit of 5 VMs before you have to run the tool again. To circumvent this, use the command ViewDbChk --scanMachines --force --limit 100 to set an upper limit of 100 VMs (or whatever you deem appropriate).
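For reference, the two invocations I ended up using are below, run from that elevated command prompt on the Connection Server (adjust the limit to suit the size of the mess you're cleaning up):

  :: Interactive scan of the View environment for orphaned or missing machines
  ViewDbChk --scanMachines

  :: Force the clean up through without prompting on every machine, raising the limit to 100
  ViewDbChk --scanMachines --force --limit 100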

Props to Griff James for this cool tool, and happy flinging!!

 

15-08-14

VCAP-DTA Consolidated Study Guide 1.5 Released

Now that I have finally passed, I’ve been back over the Consolidated VCAP-DTA Study Guide and updated it. I’ve done some small formatting changes so it’s a little easier to read, as well as correcting a few typos I found and adding in the two quick reference tables for PCoIP and Windows image tuning that I blogged about previously. I’ve also added in a few exam tips for those thinking of sitting soon. As this road has now come to a bit of an end, I won’t be maintaining this guide from here on in until the VCAP6-DTA is released, which I expect to be a little way off just yet.

I’ll also update the sample questions guide, but that may follow in a week or so.

Enjoy!

 

14-08-14

VCAP-DTA Exam Experience (Redux)

So I got back about an hour ago from my second sitting of the VCAP-DTA exam in Leeds. As regular readers will know, I sat it a couple of weeks ago and failed. The score report I got back gave me some suggestions on the areas I wasn't quite so hot on, so I spent some extra time going back over those and making sure I understood them (two factor authentication and group policy settings, to name but two). I had the mindset that if I didn't pass it today, it would be a while before I'd be back, as my employer wants me to get up to speed with the latest MCSE track quickly, meaning I wouldn't have the bandwidth (or the mental capacity!) to take on both at the same time.

Nor did it help that I was running a little late. I'd had a coffee and an early lunch because, as usual, my appointment spanned lunch time and I didn't want to get hungry. By the time I set off for the test centre it was getting close to my appointment start time, so I had to run the last couple of hundred yards to make it. With that and a coffee swilling around inside me, my eyes were on stalks when the exam started!

I’m not sure how large the pool of questions is, but I did get a few I’d had previously, including some I came a little unstuck on. I tried to move on if I felt I was getting bogged down, with the intention of picking up as many points as possible elsewhere. Somewhat surprisingly, by the time I’d completed question 23, I still had 30 minutes left. So I went back, quickly checking my answers and referring to the admin guide on the ones I was stuck on.

It turned out to be a pretty effective strategy, although I did go back to delete and restart one "answer" I'd started and then ran out of time – the desktop refresh was a little laggier than last time, so I couldn't quite complete the task.

I came out feeling tense, as I thought I'd passed last time and didn't, and I was mindful that I hadn't completed all the tasks, with the loss of points that entails. Anyway, I got the score report back quickly again (thanks Joshua!) and this time, thankfully, I've passed! So now I have four VCAPs and I can afford to dream of the far off pot of gold that is the VCDX. I'm not going to think about that yet, as I've a box full of Microsoft exams to get done before I can get to it. Still, in the words of Peter Venkman, "we came, we saw, we kicked its ASS!"

 

G-1136 - We came, we saw, we kicked its ass

 

07-08-14

VCAP-DTA PCoIP Tuning Quick Reference

Following on from my previous post regarding tuning your Windows 7 image for the purposes of the VCAP-DTA exam, I lifted the following table from the View 5.2 Best Practices guide. In the exam you don’t have a lot of time and you’re probably going to have to tackle a question at some stage about PCoIP performance or be asked to tune it for certain restrictive network conditions. The table below has a handful of settings which should help you go a decent way to getting good marks for this question:-

Setting | Default | Recommendation | Description
Build to Lossless | On | Turn off | Controls whether the build to lossless feature is enabled or disabled
Session Audio Bandwidth Limit | 500Kbps | 50-100Kbps | Reduces the bandwidth used by audio while keeping usable quality
Maximum Frame Rate | 30 | 10-15, based on network conditions | In WAN conditions, this helps video playback and fast graphics operations
Maximum Session Bandwidth | n/a | Set per network conditions | Helps PCoIP make a better bandwidth estimation
Client Side Cache Size | 250MB | Set per client-side memory available | Configures the client side image cache size
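As an aside for anyone tuning this outside of the exam: once the PCoIP GPO template has been applied, the session variables end up as registry values under HKLM\Software\Policies\Teradici\PCoIP\pcoip_admin on the virtual desktop. The value names below are from memory rather than lifted from the pcoip.adm template, so verify them against your version of the template before relying on them – this is a sketch, not gospel:

  :: Turn off build to lossless (value name assumed – check your pcoip.adm template)
  reg add "HKLM\Software\Policies\Teradici\PCoIP\pcoip_admin" /v pcoip.enable_build_to_lossless /t REG_DWORD /d 0 /f

  :: Limit session audio bandwidth to 100Kbps
  reg add "HKLM\Software\Policies\Teradici\PCoIP\pcoip_admin" /v pcoip.audio_bandwidth_limit /t REG_DWORD /d 100 /f

  :: Cap the maximum frame rate at 15fps for WAN conditions
  reg add "HKLM\Software\Policies\Teradici\PCoIP\pcoip_admin" /v pcoip.maximum_frame_rate /t REG_DWORD /d 15 /f

In the exam and in production you'd normally make these changes through Group Policy with the PCoIP ADM template rather than hacking the registry directly, but it's useful to know where the values end up.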

05-08-14

Windows 7 Desktop Tuning Quick Reference

Another item that kicked me a bit in the VCAP-DTA exam (as per the RADIUS post below) was tuning the Windows 7 desktop image for VDI. I mean, that could be a million settings, couldn’t it? Where do you start? You could take the whole of the three hours of the exam tweaking and changing! While going through a View best practices white paper for another piece of work that I’m doing, I came across a handy chart for a handful of basic items you should tune on your Windows 7 desktop, which is a damn sight easier than remembering hundreds of registry keys and group policy settings!

 

Parameter | Configuration
vCPU | 1 for WinXP, Win7 and Win8; 2 for multimedia intensive apps
Memory | 512-768MB for WinXP; 1GB for 32-bit Win7 and Win8; 2GB for 64-bit Win7 and Win8; 3GB for 64-bit Win7 and Win8 running memory-intensive apps
Network adapter | VMXNET3 or Flexible
Storage adapter | PVSCSI or LSI Logic SAS
VMware Tools | Latest version installed
Visual settings | "Adjust for best performance"; disable animations for window maximise and minimise operations; use the default cursor for the busy and working cursors
Disable services | Windows Update, SuperFetch, Windows Search indexing
Group policy settings | Disable hibernation; set the screensaver to None
Other settings | Turn off ClearType; disable fading effects; disable AutoPlay and external drive caching for quick removal; disable last access timestamps (1)

(1) Set the registry value HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\NtfsDisableLastAccessUpdate to 1
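If you'd rather script that last one than poke around in regedit, either of the following (run in the base image as an administrator) should do the same job – the fsutil route is the one I'd normally reach for:

  :: Disable NTFS last access timestamp updates
  fsutil behavior set disablelastaccess 1

  :: Or set the registry value directly
  reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v NtfsDisableLastAccessUpdate /t REG_DWORD /d 1 /f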

31-07-14

Configuring RADIUS Two Factor Authentication with Horizon View

One of the things I fell short on in the VCAP-DTA exam was RADIUS and two factor authentication. I hadn't really done much with it for the thick end of 10 years, and when it came to the exam I just hadn't worked through it enough to remember what all the moving parts were and how they worked. Once I'd failed the exam, I wanted to go back and review how those parts hang together and how to set it all up from start to finish.

Like most people with home labs, I have a couple of Windows Servers performing multiple roles, including Domain Controller, Certificate Authority etc. One thing you can do to practice configuring RADIUS authentication for Horizon View is to install the Network Policy Server role on one of your Windows boxes and configure RADIUS. When Googling how to do this, I found a really good (and up to date) white paper on VMware’s website with clear and concise instructions about how to configure the Windows Server end and also the Connection Server end to make two factor authentication happen.

Literally from start to finish, the process took no longer than around 10-15 minutes. Well worth a run through before the VCAP-DTA exam to make sure you really understand RADIUS components and how Horizon View hooks into it. The guide also covers RSA Authentication Manager if you want to practice that, but I wouldn’t expect to see that option on the exam. Worth knowing though, just in case.

The white paper (PDF) is available here.
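If you want to script the NPS side in a lab rather than clicking through the console, something along the lines of the commands below should register the server and add the Connection Server as a RADIUS client. I set mine up through the GUI rather than netsh, so treat the exact syntax as an assumption to be checked against the netsh nps documentation – and the server name, address and shared secret are made-up example values:

  :: Register the NPS server in Active Directory
  netsh nps add registeredserver

  :: Add the View Connection Server as a RADIUS client (example values only)
  netsh nps add client name = "view-cs01" address = "192.168.1.50" state = "enable" sharedsecret = "MySharedSecret"

The Connection Server side is then configured under the Authentication settings in View Administrator, which the white paper walks through step by step.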