15-04-16

VMware VCAP6-DTM Design – Exam Experience

I just got back from sitting the beta of the VCAP6-DTM Design exam, so I thought I would give a bit of feedback for anyone thinking of doing it at any point in the future. Obviously the caveat to this post is that the exam today was a beta (so still very much in development) and also that it's still under NDA, so no real specifics, I'm afraid.

The exam itself was 38 questions over 4 hours, although I completed it with about an hour to spare. I got the invite a couple of weeks ago and thought "why not?". It's only eighty quid, and you don't often get the chance to sit a VCAP for such a low fee.

The design exam takes the form of drag-and-drop and design canvas questions. I kind of felt under no real pressure to deliver on this exam – I'm not currently doing much in the way of the VMware stack, so it was almost a bit of fun. I remember sitting the VCAP5-DTD (as was) and feeling a lot more pressured on both time and knowledge, but reckoning it up, that was over three years ago now! Time flies, and I'm certainly much more experienced, not just as an architect but also with View.

I think in the released exam, you only get 6 design canvas questions, but in today’s beta I got a lot more than that! I can’t recall exactly how many, but there were at least a dozen, I’d say. I’m not sure if that was just a data gathering exercise or if that is the way the exam will go, but best to know your reference architectures if you’re planning to sit this exam later in the year.

The exam also seemed to be much more in tune with the way the VCDX is done, in respect of requirements, assumptions, constraints and risks. You also need to understand the differences between conceptual, logical and physical designs, and between functional and non-functional requirements. I think this exam will prepare you much better for a VCDX crack; I can't honestly remember if the original VCAP5-DTD ran along those lines.

In terms of tech, a good chunk of the exam is made up of existing View technologies, so understand all the core components well:-

  • Connection Servers
  • Security Servers
  • Desktop Pools
  • Full and Linked Clone Desktops
  • 3D Graphics
  • ThinApp
  • RDSH (quite a lot of content on that)
  • View Pods
  • Pod and Block Architecture
  • Workspace

I'll be honest and state right now I've never touched AppVolumes or Mirage, much less seen them in the field. I spent a chunk of time over the last couple of days looking at some of the linked documentation from the exam blueprint, such as reference architectures, use cases and also the product documentation.

As it's a design exam, it takes an architectural approach, so you don't need to know which vdmadmin command to run to perform a given task, for example. What you do need to know is what components do what, how they link with each other and what the dependencies are. It's a lot more in depth than a VCP, but if you have spent any time in the field doing a requirements analysis and then a subsequent design and delivery, you should be fine.

I didn't take a lot of care with my answers in the sense that I didn't really agonise over them. I did check them before I moved on, but as I said, I felt no pressure and I really just went with my gut instinct. More often than not, that's the right way.

In terms of non-View components, I’d say you need to know and understand the high level architectures of AppVolumes and Mirage. I can’t recall any questions on the Immidio product, so maybe that didn’t make the cut or maybe my question pool just didn’t contain any. Latterly though, I did get some questions that referred to the “traditional” Persona Management. Wouldn’t hurt to have a basic understanding of Immidio though (or whatever it’s called these days).

There are a few questions where you need to count your fingers – there is no access in the exam to a calculator, which is a massive pain in the arse. Microsoft exams always have it, not sure why VMware seem intent on exam candidates getting their fingers and toes out. Let’s be honest, you wouldn’t do that in the field, would you? I did comment back that a calc would be very handy for someone like me who is incredibly lazy when it comes to arithmetic!

So to sum up, not massively different from the VCAP5-DTD I remember, with core View still very heavily tested. As I mentioned previously, make sure you have a good working knowledge of AppVolumes and Mirage in terms of the architecture and what the component roles are. Probably wouldn’t do any harm to understand and remember what ports are used in which scenarios, either. Configuration maximums too – you’ll need to know how many users a given component will support when designing a solution for a specific number of users.
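To illustrate the sort of arithmetic I mean, here's a minimal Python sketch of sizing brokers for a given user count. The per-component limits are illustrative assumptions on my part, not quoted maximums – always check the configuration maximums document for the exact version you're designing against:

```python
# Back-of-envelope component sizing. The limits below are illustrative
# assumptions, not quoted maximums -- verify against VMware's own documents.
import math

SESSIONS_PER_CONNECTION_SERVER = 2000  # assumed per-broker session limit
SESSIONS_PER_POD = 10000               # assumed per-pod session ceiling

def size_for(concurrent_users, n_plus=1):
    """Estimate pods and Connection Servers (with N+1 redundancy)."""
    pods = math.ceil(concurrent_users / SESSIONS_PER_POD)
    brokers = math.ceil(concurrent_users / SESSIONS_PER_CONNECTION_SERVER) + n_plus
    return pods, brokers

pods, brokers = size_for(5000)
print(f"5000 concurrent users -> {pods} pod(s), {brokers} Connection Servers")
```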

I won’t get the results now until 30th June or so (that’s what the beta exam page says, anyway), so we’ll see. Do I think I’ve passed? Who knows. I’ve given up predicting things like that after I did the VCP-CMA beta thinking I’d done well, only to crash and burn. It has no massive effect on me anyway, as I’m currently 100% focused on AWS and Azure, but it would be nice to top up my collection of VCAPs further. As always, any questions, hit me up on Twitter but just don’t ask for any exam question content specifics.


23-03-15

Upgrading the home lab Part I : vCenter Server

vSphere 6.0 has finally shipped, so I decided to take the plunge and upgrade the home lab to it. In the next couple of posts, I'll outline the steps required to perform the upgrade, plus any issues I encountered along the way. I think most people know that most articles I write are focused around VDI, so let me say this straight out of the gate: Horizon View is not supported on vSphere 6.0 until View 6.1 is released. So basically, don't upgrade any environments to vSphere 6.0 where View is in use – stuff will probably break. When is View 6.1 out? I don't have a clue right now, but seeing as the release notes have been posted up, I can't imagine it's too far away.

If you’re project planning and you need to have certified upgrade paths (I know some project documentation requires this in some companies), the current (at the time of writing) interoperability matrix result is shown below:-

Compatibility Matrix

 

So my home lab is a very simple affair indeed. It comprises a single host, a Dell PowerEdge SC1435 with 32GB RAM and two Opteron processors. Old hat I know, but it gets the job done. For those wondering how I deal with power management on such old kit, it's simple: I turn the bugger off when I'm not using it! As I'm often on the road, I don't see a lot of value in having the beast humming away in the attic when I'm not around to use it.

Anyway, that aside, it’s currently on ESXi 5.5 U2 and runs the vCenter Server Appliance. I chose the appliance because it’s quick and simple, and I don’t have to faff around with Windows licenses. I know Linux quite well, so I don’t have any fear knocking around on the command line. In vSphere 6.0, the back end architecture of vCenter changes somewhat. If you recall, in vCenter 4.x and 5.0, everything was pretty much just baked into one installer. In vSphere 5.1 and 5.5, Single Sign On was broken out (and made good in 5.5!), as was the Inventory Service to provide a more modular environment should you wish to break things out a little for scalability and to mitigate the “all eggs in one basket” factor.

Further to that, vCenter 6.0 now has the concept of the "Platform Services Controller". Put simply, the Platform Services Controller groups the infrastructure or access services, such as SSO, the License Service, the Lookup Service and the Certificate Authority. vCenter Server is basically everything else, so the Inventory Service, PostgreSQL, Web Client, Dump Collector, et al. For my simple purposes, I'm just going to upgrade from my current vCenter 5.5 appliance to the 6.0 appliance; I don't really need to start making the design overly complex. In fact, because it is just a lab environment, I'm not massively bothered if the upgrade process blows up, I can just rebuild from scratch. Obviously in a live environment, I'd be slightly more circumspect!

One important caveat to note is in the VMware documentation:-

You cannot switch the models after deployment, which means that after you deploy vCenter Server with an embedded Platform Services Controller, you cannot switch to vCenter Server with an external Platform Services Controller, and the reverse.

For full information on the pros and cons of either method, please refer to the product documentation. I'm not going to go into that level of detail here. What is reassuring for me, with my one host and a dozen VMs, is the following comment in the documentation:-

vCenter Server with an embedded Platform Services Controller is suitable for most environments.

Cool, I feel kind of validated now. I couldn't see at first blush how the sizing of your environment affects your design decision; I suspect it's more to do with geographical constraints, multiple vCenters and other VMware products that integrate with it, such as vRealise Automation. More on that in the future once I understand it better!

The Appliance Upgrade Process

The process of upgrading your vCenter appliance from 5.x to 6.0 is actually more of a migration than an upgrade. In essence, what you're doing is spinning up a new 6.0 appliance with temporary IP address information, using SSH to copy over the database from the source 5.x appliance (and historical data if you so choose), changing the target appliance's IP address to the source's address and then dropping the source 5.x appliance.

Meeting Prerequisites

As you might expect, there are several prerequisites to be met before actually copying over any files or making any changes. First and foremost – have a backup, and no, a snapshot is not a backup! By all means take a snapshot of your vCenter Server prior to starting the process, but have a block based backup too, whether that's via Veeam or any other backup method. Don't rely on snapshots. If you do, your upgrade deserves to fail!

Again, the product documentation is the best place to refer to for these, as I'm sure they will change over time as experience comes back from the field once the product is being deployed. Once the pre-requisites have been met, we're hot to trot and ready to install our vCenter Server 6.0 appliance.

Download the appliance ISO file from MyVMware (at the time of writing this is VMware-VCSA-all-6.0.0-2562643.iso) and you may notice from the off that we're not downloading an OVA/OVF as we did previously. This is because the upgrade method is slightly different. Instead we're going to take the ISO and mount it locally on our Windows machine (if you're on Windows 8.1, you can right click and select Mount to mount the ISO to your DVD drive) as shown below:-

mount-vcsa

 

Alternatively, you can right click and extract with 7-Zip (for example) and create a source directory of files. However, for my purposes, I’m going to keep it simple and mount the ISO in Windows. Once mounted, we need to navigate to the DVD drive and go to the \vcsa folder. In here is the installer for the Client Integration Plugin, which we will need for this process. As a good habit, don’t forget to right click the installer and select “Run as Administrator”, as shown below:-

client-runasadmin

You'll need to close any running browsers while the plugin is installed, and then step through the simple installer, which should take just a minute or so. Once this install is complete, no reboot should be required and we can jump back into the root of the installation DVD and run the main installer, vcsa-setup.html. I ran this with IE; I don't know how well it works with other browsers. You will need to allow the browser plugin to run in order to start the installer, so click "Allow" (if you're using IE):-

allow_installer

You should then be greeted with the option to install or upgrade. If you don’t see this screen, go back and check your client integration plugin installation (a reboot may help):-

install

Time for another quick sanity check at this point that the pre-requisites have truly been met. In my case I’m running a 5.5 version of the virtual appliance, as shown below:-

old-vc

 

so I’m good to go:-

sanity-check

Now to the meat and potatoes of the installer itself, and a nine part wizard now has to be negotiated in order to kick the upgrade process off. First up is the usual EULA. I read this thoroughly (twice!), consider myself happy with it, tick "I accept.." and click Next:-

part1-eula

Next I need to input details of the ESXi host to which I’d like to push out the new virtual appliance. Note the caveats listed at the bottom of this step – disable lockdown or maintenance mode and if you’re using a VDS, initial deployment of the appliance must be to an ephemeral port group. Click Next:-

part2-esxi

Accept the certificate warning by clicking Yes in order to continue:-

part2-cert

Your host credentials will then be validated as shown below:-

part2-creds

Step 3 is setting up the appliance name and SSH options. I’m calling mine the highly imaginative and original name of “vCenter” and I’m turning SSH on because it’s a lab and I’m not massively fussed about security. In the real world, leave this disabled unless you specifically need it. Click Next to proceed.

part3-vm

Step 4 is configuring the source information for the migration, so your existing vCenter Server. In this screen you need to enter IP/DNS details of the current appliance, the version number, your administrator@vsphere.local SSO password (and if you're a buffoon like me and forgot what the password was, you can reset it using this KB!), the root password for the appliance, and host information along similar lines. You can optionally choose to migrate performance and other historical data. I'm going to enable this option, and I can't think of any great reason in the real world why you wouldn't do this (unless your database is mah-oosive). Before you proceed, check the caveats at the bottom of the page – that lockdown mode and maintenance mode are disabled, and that DRS is disabled so the source appliance doesn't move off its host part way through. Click Next:-

part4-source

Once the credentials have been accepted, take care to check the information presented to you on what happens post upgrade. The new appliance will inherit the old appliance's IP address. Also make sure no firewalls prevent SSH access between the two appliances, as this is the transfer method used during the migration/upgrade. The installer will start the SSH service on the source appliance if it is not already running. Click Yes:-

part4-warning
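Before you click through, if you want to sanity check that nothing is blocking SSH between the two appliances, a quick port test from a machine on the same segment does the job. A minimal Python sketch – the appliance names and addresses are placeholders for your own:

```python
# Pre-flight check that TCP/22 is reachable on both appliances.
# The names and IP addresses below are placeholders -- substitute your own.
import socket

APPLIANCES = {"source-vcsa": "192.168.1.10", "target-vcsa": "192.168.1.11"}

for name, address in APPLIANCES.items():
    try:
        with socket.create_connection((address, 22), timeout=5):
            print(f"{name} ({address}): SSH port reachable")
    except OSError as err:
        print(f"{name} ({address}): NOT reachable - {err}")
```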

In step 5, we choose what size of appliance we'd like. This is pretty similar to the Windows based installation and helps ensure JVM heap sizes are set appropriately, amongst other things. I feel quite inadequate choosing the "Tiny" option, but it is what it is and I have no requirement to go any bigger than that. Size clearly is everything in this case. Make your selection as appropriate and click Next:-

step5-size

Step 6 is choosing which datastore you’d like the appliance to go on. I’m going to choose the one with the most available space and I’m also going to thin provision it. This appliance won’t be doing a great deal in my lab and the datastore isn’t likely to fill up quickly, so I have no real need to thick provision here. Click Next:-

step6-datastore

Step 7 is the creation of some temporary network details for the new appliance, as this is really a migration as opposed to an in-place upgrade. In this step, we should pick the same port group as the source appliance and use the same subnet as well if possible, especially if the data migration is going to be large. My database is small and it's a single host, so speed shouldn't be an issue. Fill out the details appropriate to your environment and click Next:-

step7-network

Step 8 is the “Ready to complete” or final sanity check. Review the information presented and check for typos, wrong IP addresses, what size appliance you need (sometimes “Tiny” just ain’t enough!) and when you’re happy, click Finish:-

step8-ready

All being well, the process should start by initiating the appliance deployment to your ESXi host as below:-

Deploy

You can also monitor what is going on by connecting to the ESXi host with your vSphere Client and clicking on the "Events" tab of your target appliance; you should see something similar to the below:-

transfer

And some while later, I was greeted with the following screen. The process took around an hour, but bear in mind this is a “tiny” and simple deployment. Larger enterprises should budget a lot more time for this process when migrating between appliances:-

finished

I double checked that my "old" appliance had been powered off, and the new appliance is up with the original appliance's identity:-

appliance-screen

Once the install is completed, you may like to perform some housekeeping before you put the appliance into production. By default, my appliance came up with the DNS name localhost. I don't really want that, so I quickly logged into the appliance console to change it. Something new you may notice is that the vCenter appliance console has been set up to behave just like an ESXi host's, so once you press F2 to customise the system and enter the root password, the menu structure should be pretty familiar:-

appliance-menu

I like that VMware use a different colour scheme on the appliance to save any confusion with being connected to an ESXi host. Even though you can see it's the appliance at the bottom of the screen, with many screens open it may help prevent costly mistakes! To go back to the original housekeeping, go into Configure Management Network and then DNS Configuration. Input the appropriate values for your appliance, as shown below:-

new-dns

I also like to disable IPv6, though there is a mixed bag of opinion on this. I say if you don’t use it, don’t enable it. However, this is a subjective thing and purely optional. To disable IPv6, go into Configure Management Network and then into IPv6 Configuration. To disable it, hit the space bar to uncheck the box as shown below and restart your appliance.

disable-ipv6

Once rebooted, you can see we’re up to vCenter 6.0!

about-vsphere6

Post upgrade issues

I have only really come across two issues so far – firstly I got an “Error #1009” which I cleared by deleting cookies etc from my web browser (and also upgrading the Flash Player in Firefox to the latest version).

As you can also see from the above screen shot, I was having issues with the Client Integration Plug-In. It was definitely installed when we started the migration process, and all three browsers I had (IE11, Firefox, Chrome) reported the issue, so I uninstalled the plug-in from Add/Remove Programs, rebooted, downloaded the plug-in again from the Web Client login page, installed it and, as you can see below, all was good:-

integation-enabled

Conclusion

In conclusion, I’d say well done to VMware for streamlining the upgrade process for the vCenter appliance. Yes, it has a couple of quirks and yes you should ensure all pre-reqs are met, but by and large I was pretty impressed with the whole process. Next up, my ESXi host….!

 

16-02-15

Elite Implementer Status : A Few Thoughts

Cert_Roadmap_2015Q1_v5_final_WEB

 

(Image taken from vmware.com)

There is a lively thread going on over at LinkedIn regarding the new VCx 6.x tracks that I felt compelled to jot down a few thoughts on. Firstly, once the new track goes live, the VCAP level certs will be renamed to VCIX (VMware Certified Implementation Expert) and will require two exams as before: one for administration and one for design. So far, so good. Two exams as before, presumably of similar lengths to the current VCAPs and with the same core set of skills being measured. However, instead of having two certifications to your name (VCAP-DCx and VCAP-DTx), you'll have one. Fine, I suppose it makes sense and I don't have a problem with that.

Now comes the interesting bit – "Elite Implementer status will be granted for candidates who complete multiple VCIX certifications". I'm glad VMware have recognised the amount of effort and skill required to complete multiple Advanced tracks; however, these exams aren't yet live (I'm guessing it will be around VMworld time before we see them in the wild) and there are a lot of people out there whose VCAP certifications are current and who have completed multiple tracks.

In my opinion, there is no reason why VMware cannot enact this change right now. It costs them nothing and provides recognition to those who have spent a minimum of around 12-14 hours sitting these tough VCAP exams and getting through them. Think about it. Yes, we'd all like to be VCDXs, but the crushing reality is often that this certification requires a level of commitment way over and above anything I've seen from any other certification. I simply don't have the time and energy to commit around 100-150 hours to putting together a design, submitting it to VMware and then defending it in front of a panel, much as I'd love to.

The VCAP exams are tough, make no mistake. Not only do you need to have “operational” experience with all the respective products, but you also need to have a good understanding of the overlapping ecosystem – such things as third party solutions, Active Directory, Group Policy, storage, networking and more. Anyone with a VCAP cert has been through the mill to get it and deserves a pat on the back. To have both design and administration certs for multiple different VMware technologies elevates you to another level still.

So in short, come on VMware, recognise your multi-track vRockstars now and give them Elite Implementer status. It's a small gesture that will go a long way and keep existing holders motivated for when the 6.x track comes on line. For more information on the 2015 track announcements, please visit MyLearn.

Comments and opinions are welcome, maybe with enough weight we can make it happen!

 

29-01-15

VCP-DT6 – What’s New?

I noticed the other day when looking for something entirely unconnected that the latest iteration of the VCP-DT has sneaked out, somewhat under the radar. The exam is intended to test your skills around the full Horizon View stack, including Workspace, vCenter Ops for View and Mirage. AppStacks doesn’t make it in there, but that’s not a great surprise. The ink is barely wet on the acquisition paperwork, so I suppose that will form part of the VCP7 track, or whatever it gets called.

So then, what's new? The most obvious items that leap out from the exam blueprint are Mirage and Horizon Workspace. If you've been hiding under a rock or EUC stuff just isn't your thing, Mirage is a product acquired from Wanova a couple of years back which performs layered image management of physical and virtual desktops. Horizon Workspace is a portal that runs from a Linux appliance and can present virtual desktops, applications and more via a unified web interface.

The exam itself is 120 questions, which does sound like rather a lot but you have 120 minutes in which to answer them all, so 1 question per minute. Even I can work that one out! If English isn’t your mother tongue, then you get an extra 30 minutes.

So other than Mirage and Workspace, what else does the exam cover? As you'd expect, as View requires a vSphere stack, there are some questions relating to the install and configuration of vCenter and ESXi hosts. That's been in there since the start, so no real surprises there. You'll also need to know the basic building blocks of a View infrastructure, so Connection Servers and the like. I notice the blueprint makes mention of RDSH (Terminal Services in old money), which of course is new in View 6.x, so as well as firewall rules you'll need to know how to manage RDSH. There are also objectives around creating RDSH farms and desktop and application pools for RDSH apps.

View Cloud Pod architecture is featured as objective 2.6 – this again is a new feature of View 6.x and is lovingly referred to by me alone it seems as Linked Mode for View. This is where you can have two separate View instances and tie them logically together as one for fault tolerance and high availability.

Section 5 is pretty heavy on VMware Mirage, so my inference here is that you're going to have to know this product reasonably well if you want to pass the exam. Installation and configuration seems to be the order of the day, so know how to install and configure the major components such as the Management Server and Console, Web Manager, Mirage Server and Mirage Gateway Server. If you've not come across Mirage before and you want a primer to get you going for the exam, I recommend giving the free VMware Hands On Lab "HOL-MBL-1455 – Managing Desktops with VMware Mirage" a go.

Section 6 hammers Workspace Portal. As users become more and more mobile and expect a consumer-style "App Store" environment, I foresee Workspace Portal becoming ever more popular. It can also serve as a single point of entry for virtual desktops, RDSH applications and ThinApps. Again, if you don't have time to spin up a test environment and you want to get to grips with the product a bit better, try the hands on lab "HOL-MBL-1453 – VMware Workspace Portal – Explore and Deploy".

Virtual SAN gets a mention in objective 9.2. I suppose this is more of a product awareness thing, as in its first iteration it has a reasonably narrow use case in my opinion, certainly in the EUC space. Again the Hands On Labs come to the rescue to give you the product basics; try "HOL-SDC-1408 – VMware Virtual SAN 101", which I would expect to give you enough knowledge to get past any Virtual SAN questions you might encounter.

Finally, objective 12.3 covers off vCenter Operations Manager for View. Yes, I know it isn't called vCOps anymore, but there was obviously a timing issue between the exam and the rebranding of the product! V4V is basically a View specific adapter that snaps into the regular vCOps product, so you'll need to know how to do that too. Guess what? There's a lab for that! Have a look at "HOL-MBL-1452 – Horizon View – Use Cases" to get a first hand view of what V4V is all about.

As for me, I doubt I'll sit this exam unless I have to (to maintain Partner status etc.), as I'll wait for the updated VCAP exams. I actually prefer the VCAP exams as they focus a lot more on "doing" rather than memorising scalability numbers and things like "What version of SQL do I use for vCenter?". I'm also busy at the moment with other vendors' certs, so hopefully the VCAP updates are a few months off yet!

If I get some time, I’ll try and put together a short study guide for the VCP6-DT exam, so I’ll tweet about that if and when it happens.

02-12-14

Teradici University Videos

A mercifully brief blog post today – if you're looking to get down and dirty with PCoIP (and who isn't?), there are some excellent free training videos on Teradici's website. You will notice that they're a little dated (2010 in some cases), but fundamentally the ideas remain the same and you can certainly take the knowledge forward into View 6.x deployments. You can also register for free on the support site, where there are some other tools and bits and pieces; it's well worth the effort. Also worth a look if you're studying for any View exams – some of the video content lends itself well to sitting the two View VCAP exams.

The videos themselves are no longer than 30 mins, which makes them bite sized enough even for my gnat-like attention span. Check out the links below for further information:-

Teradici University landing page

PCoIP Protocol Introduction

PCoIP Protocol Implementation

PCoIP Protocol On The Network

PCoIP High Level Overview

Troubleshooting PCoIP Deployments

 

28-11-14

Notes From The Trenches : Horizon View Multimedia and Graphics Tuning

In the last few weeks I've spent a lot of time with different customers from different verticals looking at the performance of multimedia playback and hardware graphics rendering for applications. I thought I already had a good grasp of how all of these technologies work and how best to tune them, but it's not until you're presented with varying requirements that your skills are really put to the test, trying to find bottlenecks and, in some cases, turning what you thought you knew on its head.

This post relates solely to VMware Horizon View, but I guess could be applied to any virtual desktop solution, whether that be XenDesktop, vWorkspace or anything else. From a View perspective, we are always hammered with the message “know your use cases”. In many ways, this message gets saturated to the degree that a touch of snow blindness kicks in and you just make some general assumptions about what will work and what won’t.

To recap, at the very least before starting a View proof of concept (PoC), you should be asking yourself the following questions :-

  • How many users will require multimedia playback?
  • What level of quality is expected? 480p? 720p?
  • Will the videos be in full screen or will a smaller window suffice?
  • What applications will require OpenGL or DirectX support?

By covering off the above points, you put yourself in a position where the proof of concept can address a high proportion of user requirements. Don't beat yourself up though if you don't nail it 100% out of the gate; getting it right on day one is unheard of! Taking each point in turn, let's look at why the responses to these points matter.

How many users will require multimedia playback?

Quite often, organisations limit access to video streaming sites such as YouTube, Vimeo, iPlayer and others to preserve office bandwidth. From a View perspective, this can do you a favour, but there are always use cases that may well slip under that radar and result in users having a degraded VDI experience that gives the solution an unmerited bad name. News websites such as Sky News or BBC News all have multimedia content these days, from radio interviews to video clips in small boxes with a full screen option.

The result of this question will help determine an appropriate pool design. If you have users that will use a lot of video (presenters, trainers, academics, for example), then it would make sense to have a dedicated pool of desktops for these users with more resource than would be given to “regular” users. Also, VMware’s recommendation is that good quality multimedia playback requires 2 vCPUs per virtual desktop, which in turn affects your infrastructure sizing requirements as this will impact your host:desktop consolidation ratios.
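To illustrate the consolidation impact of that 2 vCPU recommendation, here's a rough sketch. Every figure in it is an illustrative assumption of mine, so substitute numbers from your own desktop assessment:

```python
# Rough host sizing sketch -- all figures are illustrative assumptions,
# not VMware guidance. Substitute numbers from your own assessment.
HOST_CORES = 16      # physical cores per host (assumed)
VCPU_PER_CORE = 8    # assumed tolerable vCPU:core overcommit for task workers

def desktops_per_host(vcpus_per_desktop):
    return (HOST_CORES * VCPU_PER_CORE) // vcpus_per_desktop

print("1 vCPU desktops per host:", desktops_per_host(1))    # 128
print("2 vCPU multimedia desktops:", desktops_per_host(2))  # 64 -- the ratio halves
```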

If the answer to this question is "all users", then the next step is to determine what quality is expected.

What level of quality is expected? 480p? 720p?

If there are users expecting full screen 1080p native performance at 60 fps (the same as a Blu-ray player, etc.) then this level of expectation should be reset. The nature of a virtual desktop solution means that this won't be achieved, due to bandwidth, compression, hardware resource and other potential bottlenecks along the way. Remember, a Blu-ray player has a single cable running from it to the TV, so there isn't a bunch of other traffic flying down that cable, and the cable length is typically less than a metre (or 3 feet 3⅜ inches in old money!), so it's barely even an apples to oranges comparison.
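To put a rough number on why native quality is off the table, consider the raw, uncompressed bandwidth going down that single Blu-ray cable (a back-of-envelope figure of mine, ignoring blanking and chroma subsampling):

```python
# Back-of-envelope arithmetic: raw (uncompressed) 1080p at 60 fps,
# 24 bits per pixel -- roughly what the HDMI cable to the TV carries.
width, height, bits_per_pixel, fps = 1920, 1080, 24, 60
raw_bps = width * height * bits_per_pixel * fps
print(f"Raw 1080p60: {raw_bps / 1e9:.2f} Gbps")  # ~2.99 Gbps
# PCoIP has to compress that down to a few Mbps of office LAN/WAN,
# which is why "same as a Blu-ray player" expectations need resetting.
```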

In a View environment, PCoIP is the display protocol of choice. RDP is also available, but generally lacks the flexibility and tuning options afforded to us by PCoIP. In addition to this, there are several hardware options to help augment and improve the PCoIP experience. If you recall, Teradici are the owners of PCoIP (VMware license it for View), and they have both host card based and zero client based options to offload PCoIP processing to dedicated hardware.

One other thing worth bearing in mind is that in my experience, the human eye can’t really tell the difference between 720p and 1080p, so if you get mired in discussions with end users about this, you’re kind of missing the point. If you can deliver video at a good resolution (720p) and a good frame rate, the rest is just splitting hairs in my opinion.

Will the video be in full screen or smaller screen?

This fact obviously matters because the larger the video playback surface area, the more resource required to push it along. In the tests I’ve done, standard 480p video in a full screen uses around 20MB more GPU RAM than a video embedded into a web page (BBC News is a good example of this). If full screen high quality video playback is required, you need to factor this into pool designs and the PoC. As always though, benchmark it yourself during the PoC phase, as your mileage will inevitably vary.

What applications will require OpenGL or DirectX support?

As we ascend to the more demanding groups of users, it’s key to know who requires extra grunt on their virtual desktop. This again affects pool design but also means that some consideration will need to be given to specialist graphics hardware to enable these users to work effectively in a virtual environment as they do in the physical world. Some examples of applications requiring this level of support include but are not limited to:-

(All OpenGL apps)

  • Adobe After Effects, Photoshop CS3/CS4, Premiere Pro
  • AutoCAD
  • Google Earth
  • Google SketchUp
  • Scilab
  • Virtools

DirectX seems to be typically used in gaming, so it’s debatable whether or not there’s a use case there in general environments. That being said, one recent customer teaches computer game design and coding, which would place this use case right in the middle of your View deployment.

What are my options?

There is a very good white paper from VMware that discusses graphics acceleration in View and is well worth a read; it focusses primarily on the NVIDIA GRID solution, which I've discussed previously, and is available here. So let's say you've done your scoping exercise and completed a desktop assessment – where do you go from here? There are lots of resources out there, if your Google-Fu is up to snuff, that will tell you about the different varieties of hardware available; I'm just going to provide a simple reference based on my testing.

Multimedia Playback

For good multimedia performance, look for a thin or zero client with the Teradici chip installed. I recently tested the 10Zig V1200P Zero Client with a customer and I have to say the multimedia performance was exceptional. This was partly due to the excellent bandwidth to the desktop that the customer had, but also because with the Tera2 chip installed, the PCoIP processing gets offloaded to a dedicated hardware device, in a similar way to how a TCP/IP Offload Engine works on a network card. Of course it goes without saying that hardware will always outperform software, and this was the case in my experience.

If you can't stretch that far budget wise, ensure you follow best practice and have 2 vCPUs per virtual desktop, as software video playback is a purely CPU based operation. If you run up Task Manager on two desktops, one uniprocessor and one multiprocessor, you should see the difference in CPU spiking during multimedia playback. Remember though not to oversubscribe vCPU – there is not a linear improvement the more vCPUs you add to a desktop; in actual fact you could slow the whole environment down. This is well known in the virtualisation community and there is a good explanation as to why here. So in short? No more than 2 vCPUs per desktop unless a specific use case calls for it, which would depend on the desktop applications being able to use multiple processors.

Applications requiring OpenGL or DirectX support

Primarily in the case of OpenGL, I have seen improved performance using NVIDIA GRID K1 cards operating in vSGA mode. As stated in a previous post, make sure you have the latest versions of ESXi, Horizon View and the NVIDIA VIB on the ESXi host. So how much better is hardware acceleration? A picture says a thousand words, so I've pictured below the output from the Passmark 3D test on a desktop using software rendering with 128MB video RAM and a desktop using hardware rendering with 128MB video RAM.

IMG_20141120_134042929 IMG_20141120_134035087

On the same two desktops I've also got the 2D Passmark test results shown below:-

IMG_20141120_133950443 IMG_20141120_134005415

Apologies for the general rubbishness of the pictures (they were taken with a phone camera), but they give you a sense of what kind of performance you can expect with hardware acceleration. Thankfully, Passmark is a free tool and can be downloaded from here.

How much Video RAM should I allocate to my pool?

Great question, and somewhat subjective. VMware state that if you are using Windows 7 with Aero (and let's be honest, most people are) you should set a value between 64MB and 128MB in the pool settings (I couldn't find any change in this advice for View 6). In my experience, this is fine for basic use cases, but where multimedia is required, and especially good quality video playback, that won't be enough. We talk all the time about right sizing View deployments, but one real gap in the analysis for me is how to right size video RAM per pool.

I found a free utility called GPU-Z which is really useful for benchmarking performance during multimedia playback and determining the high watermark for video RAM usage. This can then be taken forward into the View design and ensure multimedia users have enough resource for their use case. GPU-Z can be downloaded here and is pretty simple to use. For my testing purposes, I ran it up during normal Windows navigation (starting and closing apps, web pages etc.)

gpuz-1

The above screenshot shows the tool running on a laptop and gives you an idea of the kind of information that can be gleaned. The Sensors tab is the one with all the key information; the screen shot below shows the "idle" state of the GPU during normal operations.

gpuz-2

As you can see above, we’re already at 206MB of video RAM and we’re not really doing anything. If your pool is set to the recommended 128MB RAM, there is an obvious pinch point there already. This will lead to degraded multimedia performance as there isn’t sufficient resource. Playing a small screen video from the BBC News website, the video RAM usage climbs to 284MB as shown below.

gpuz-3-video-sm

And then finally, on the full screen version of the video, the usage climbs still further to 363MB. So in this particular case, you'd be looking to set an initial pool high watermark of 370MB video RAM (for example) to give the user sufficient horsepower for video playback in View. That being said, the results will most likely be lower in a virtual environment, so make sure you continue to monitor usage during the PoC phase to ensure that the pool video RAM size is neither under nor over specced.

gpuz-4-video-lg
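Turning those readings into a pool setting is then simple arithmetic. A minimal sketch using my observed numbers and a 10% headroom figure of my own choosing:

```python
# Turn observed GPU-Z video RAM readings (MB) into a pool vRAM setting.
# The readings are from my laptop tests; the headroom figure is my own choice.
import math

observed_mb = {"idle": 206, "embedded video": 284, "full screen video": 363}

high_watermark = max(observed_mb.values())
pool_vram_mb = math.ceil(high_watermark * 1.10)  # ~10% headroom

print(f"High watermark {high_watermark}MB -> set pool vRAM around {pool_vram_mb}MB")
# Re-measure inside the PoC desktops -- virtual results will likely be lower.
```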

Conclusions

In summary, I’d say look at the following:-

  • Review end user requirements for graphics performance, including multimedia and application support
  • Spend time tuning PCoIP if you find that bandwidth is a constraint, but remember this is usually a balance and/or trade off between picture quality and playback smoothness. Audio quality is also affected by tuning maximum audio bandwidth
  • Conduct a PoC to level expectation appropriately
  • Use free tools such as Passmark and GPU-Z to accurately benchmark the environment and right size the capacity
  • Obtain eval units if you are going down the thin/zero client route and test out all the different use cases you know of to see which unit is most appropriate
  • vSGA with NVIDIA GRID cards can be very cost effective when applications require additional 3D resource
  • Consider the use of a Teradici APEX 2800 card to offload some graphics processing to dedicated hardware in the host (caveat : I haven’t tested this and Teradici – feel free to loan me one!)

18-11-14

download

UK VMUG Event Review

Yesterday I had the pleasure of attending the fourth UK-VMUG annual conference at the National Motorcycle Museum in Solihull. For those that didn’t make it, I’ve put together an event review for your viewing pleasure. Apologies for the crapness of the pictures, taken with my phone unfortunately!

 

Joe Baguley Keynote

IMG_20141118_091319

After a brief introduction from VMUG leader Alaric Davies, the day started with the now usual keynote from Joe Baguley, CTO for EMEA. This year the keynote was entitled "Rant as a Service" and, after setting the scene for around 30 minutes, the key message was still around the software defined enterprise. It was my interpretation that there was a small pop at a hyper converged company whose name may or may not contain nuts, on the basis that EVO:Rail and EVO:Rack can give you the same level of support and performance without having to buy into a single vendor. I've been feeling for a while that there isn't a lot of love lost between the two parties. I don't know if that's true, but I don't find it particularly helpful when constant implied barbs are being traded. Just my opinion!

The point of EVO:Rail is to have the infrastructure up and running within 15 minutes. The value here is that you can go to 8 partners and pick which stack and value add you want. It’s not a single vendor lock in as such, as most customers already have an existing relationship with the likes of Dell, etc. EVO:Rail is 2U in size and has four blades installed. Not dissimilar to Nutanix and UCS in that respect, though of course the UCS chassis has a larger form factor. For larger installations or special use cases such as VDI with NVIDIA graphics, the bigger EVO:Rack will be required.

One interesting line was the ongoing theme of abstraction and obfuscation, inasmuch as key components such as disks and RAID controllers are being replaced by software, and by public and hybrid cloud solutions. This of course is becoming transparent to the "end user" as we move towards a hybrid cloud type of world. If a disk controller fails, it's OK, software can take care of that. Lose a data centre? That's OK too, we'll just move to another one in the background. I'm not sure we're totally there with that one yet, but it's an interesting concept none the less.

Then we had the discussion about what "Enterprise Scale" actually means these days. As consumer demands increase exponentially (photo uploads, data requirements, data production, etc.), most consumer services these days really are "Enterprise" grade, as they have millions of people using them daily, not just the tens, hundreds or even thousands of users in a typical "enterprise" environment.

White boxes are also now taking the place of large monolithic proprietary solutions. EVO:Rail again was mentioned as an example of this, where you get a pre-built, predictable and validated vSphere environment from whichever hardware vendor you prefer. The irony that you're still locked into VMware technologies was missed at this point, but I think I see the point being made.

IMG_20141118_091425

VMworld Update – Julian Wood

I then went to the VMworld Update session with Julian Wood. One thing I'd have to say is that there was too much to fit into 40 minutes. That's not Julian's fault – as he noted, once you look into it, there are so many product releases, updates and acquisitions to keep track of that you could spend all day talking about it! There was some discussion around the vRealise suite (I'm not spelling it with a "z"!), what that means and how there are on and off premises solutions for that now. vRealise is essentially the management and automation tools bundled into a suite, so products such as vCloud Automation Center, Log Insight, vCenter Operations Manager etc.

CloudVolumes was also discussed, where applications are installed to a VMDK and then this VMDK is connected and presented to desktops in a fraction of the time it takes to do ThinApp etc. As I was listening to this, I started to think: what are the storage requirements, though? Read intensive, or are blocks cached? How does this work? Do we require any back end infrastructure such as MS-SQL?

On the EUC side, big strides continue to be made and VMware are really competing with Citrix in the application presentation stakes, as well as adding further improvements to the core View product, including Cloud Pods (or Linked Mode for View, as I like to call it!), where you can break the current scalability limits but also provide an additional site failover for virtual desktops if required, plugging one gap in the previous product set.

vSphere Futures – Duncan Epping

The next session was with Duncan Epping. His sessions are always well attended as he’s usually on the bleeding edge of what the company is doing internally, plus I’ve found him to be pretty honest in his responses to some issues that have cropped up, especially around Virtual SAN. I made quite a few notes around what was discussed, and it’s probably easier to break them down into bullet points:-

  • All flash Virtual SAN coming, to increase the configuration options for two slot blades, where currently you need flash for cache and spinning disk for content
  • Virtual Volumes (VVols) policies coming that will be based per VM
  • This functionality will be based on an array that supports virtual volumes
  • IO filters directly in the hypervisor for those arrays not VVol aware
  • Storage DRS VM IOPS reservations, so we can migrate workloads to other storage if reservations are not met
  • Storage DRS has better awareness of thin provisioning, dedupe and replication
  • Resource and Availability Service is a new web based tool that uses exported DRS settings to simulate failure of resources and ensure design is correct, validating such things as Admission Control settings
  • FT support for up to 4 vCPU
  • No more vLockstep or shared VMDK for Fault Tolerance; 10Gbps networking will be required
  • The ability to vMotion “anywhere”, requirement is that both vCenters must be in same SSO domain
  • vMotion has a 10ms latency tolerance now, working on 100ms tolerance for long distances
  • The vCenter Appliance will scale as well as the Windows version now, and will be the future of vCenter releases
  • SQL server supported externally for the vCenter Appliance
  • Task pane will be coming into bottom of Web Client
  • Less nested right click options to make the Web Client interface cleaner
  • Task concurrency, performance charts and other features will be introduced into the Web Client
  • Linked Mode will be available for the vCenter Appliance
  • Content library for ISOs etc, replicated across sites. Also includes templates, OVFs etc. Same as Hyper-V Libraries, by the sounds of it

IMG_20141118_105540

One very interesting thread was around Project Fargo. This in essence is a “re-imagining” of the snapshot process and will allow for the creation of Windows virtual machines in around 2 seconds. In the lab, Linux VMs were spun up in less than that, the overhead on the Windows side was mainly down to customisation and joining AD etc. Another way of thinking about it is “Linked Clones on steroids” in the sense that you have a parent virtual machine and lots of child virtual machines. Duncan’s blog entry as linked above goes into some good detail on what you can expect from this initiative.

Horizon View Architecture and Design – Barry Coombs & Peter Von Oven

I then went to the session by Barry Coombs and Peter Von Oven about Horizon View Design and Architecture. This wasn't really a "death by PowerPoint" session, but more a high level discussion of the key points of what you should be looking for in a good Horizon View design. There are always useful little nuggets and anecdotes that maybe you haven't come across before and that only really come out of experience. One good point from this session was that you should never let the IT team speak on behalf of the end users; in other words, don't assume IT necessarily know what the user experience is like, because they can't know every individual use case.

The key point of performing a desktop assessment phase and also a proof of concept was also reiterated, and I can't agree with this enough. Chatting to IT and some end users is not enough. It's useful as part of the whole engagement, but you also need key performance metrics and a proof of concept to see what works and what doesn't. Think of a PoC as the first draft of a document that requires lots of iterations to get it "just right". To perform a desktop assessment and some stakeholder interviews and then think you can roll out an effective VDI environment first time out of the gate is total fantasy.

Any VDI deployment (whether it's View or AN Other solution) should be an improvement on the current "physical" end user experience. Again, this is a given. If you're spending time and money replacing a solution people are familiar and comfortable with, it needs to be a visible improvement on what they already have, or the solution will simply acquire a "bad name". One interesting idea was the notion of having a "Departmental Champion" – an end user who wants to positively influence the outcome of the project. They can interface with other users and help cascade information and feedback backwards and forwards. This can give you a view inside the PoC that you would not normally have.

Some other brief points included not forgetting to factor in VM and graphics overhead when right sizing a solution (these are commonly forgotten about – guilty!), and that user concurrency should be measured in advance. Generally I use a rule of thumb of 80% concurrency, but in an organisation that has shift patterns, this may not be appropriate. Make sure the solution scales!
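Putting numbers on that rule of thumb, a minimal sketch – the percentages are my own illustrative picks, and measurement always beats assumption:

```python
# Concurrency rule of thumb -- 80% is my default, but shift-based
# organisations may peak much higher, so measure before you size.
import math

def concurrent_sessions(total_users, concurrency=0.8):
    return math.ceil(total_users * concurrency)

print(concurrent_sessions(1000))        # 800 with the default rule of thumb
print(concurrent_sessions(1000, 0.95))  # 950 for a shift-pattern organisation
```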

IMG_20141118_131653

EUC Update – Peter Von Oven

My next session was another EUC session, this time with Peter Von Oven from VMware. Again, a lot of key messages came out pretty thick and fast, so a bullet point summary is included below:-

  • VMware's strategy is still the three pillar strategy of SDDC, EUC and Hybrid Cloud
  • AppVolumes (formerly known as CloudVolumes) will be available in December
  • Horizon Workspace can disable icons based on physical location – it's context aware in that sense. So, for example, the R&D portal is not accessible from Starbucks, but is from the corporate LAN
  • Horizon Workspace provides a central point of management
  • AppVolumes will be in the Enterprise Edition of Horizon View
  • View 6 makes it possible to co-exist with and transition from XenApp environments
  • Windows 2008 or 2012 server required for RDSH Application Publishing, and can mix and match if required
  • Easier than upgrading to XenApp 7.5, in the sense that a new infrastructure does not need to be stood up
  • Seamless application remoting, even on Mac
  • Use vCOps for View and do a 60 day assessment of your environment – though I’m not sure you get the same level of information as you do with say Stratusphere FIT
  • Use thin clients, not zero clients, for unified communications in VDI
  • Fully supported by Microsoft for Lync over PCoIP
  • Webcam and mic done using USB redirection
  • Use case for ThinApp is portability and isolation, AppVolumes for performance
  • Application catalogue allows user self service of applications, can remove after 30 days etc
  • Workspace Suite is Horizon + AirWatch, includes Horizon Advanced for Workspace
  • vGPU like Citrix, coming Q1 next year – vGPU is covered here and is essentially dedicated hardware VGA acceleration but with the consolidation ratio of vSGA. Still uses the NVIDIA driver for application validation and support
  • Horizon Flex out in December, delivers containerised desktops in much the same way as the old VMware ACE product
  • No dependency for Horizon Flex on Mirage at the back end
  • Requires Flex policy management server and provides time limits, grace period, remote lock and wipe, USB lock down, etc

IMG_20141118_140229

Cisco and VMware – Chris Bashforth

For my final breakout of the day, I went to the Cisco partner presentation on UCS and VMware View. I have to say I didn’t find this session all that useful. I don’t know if it was due to the graveyard slot at the end of a long day or if it was just the general dryness of the topic, but I never really felt like the audience engaged with the speaker and the atmosphere fell a little flat. We were given a brief overview of UCS for those who have never seen it before and then a quick run through of the blade and chassis models available and which are recommended for VDI deployments.

I'm still quite new to UCS, having been an HP guy all of my career, so there were some interesting items in there, but I didn't feel I got a lot out of this session and left a little disappointed. For those folks wanting to use NVIDIA GRID cards in their UCS deployments, you will need to use C class rackmount servers, which have two slots available for this purpose. B class blades are densely packed and simply do not have the space to accommodate this card.

One thing to correct is the speaker’s comment that NVIDIA vDGA will support 8 users per server – this isn’t true. Direct passthrough means that you connect the physical VGA card to the virtual desktop on a 1:1 basis. I can only assume he got mixed up with the upcoming vGPU which will be a similar passthrough arrangement, but with the ability to get a higher consolidation ratio of up to 8. If I misinterpreted these comments, please feel free to let me know.

IMG_20141118_154030

Closing Keynote – Chris Wahl

The closing keynote was from Chris Wahl, industry legend and double VCDX. The force is strong with this one! The session was entitled “Don’t Be a Minesweeper”. I went into the session wondering what the correlation was between stealing bits of beer from tables (my definition of a Minesweeper) and the IT industry, but it turns out he was referring to the cheesy clicky clicky game of previous Windows’ vintages. The general gist was that automation is the way forward, we’re seeing that now, and it pays dividends to be ahead of the curve by learning some scripting now. Whether that be PowerShell, PowerCLI, Python or anything else.

I did particularly enjoy Chris's attempt at using British slang. Top marks to him for differentiating between bollocks (bad) and the dog's bollocks (very good). It's not always easy for an American to grasp such a concept, depending on whether or not said objects are canine connected, but I think he did pretty well!

IMG_20141118_164222

Summary

Overall it was a very good day and a hearty well done to the VMUG committee who put it all together. This was my third VMUG-UK and each time it just keeps getting bigger. I don’t know how many showed yesterday, but I heard on the Twittervine that nearly 600 had pre-registered, which is absolutely fantastic. I did wonder if the event is now starting to outgrow the venue – the solutions hall was packed and difficult to navigate and lunch and brew breaks got quite cramped for space, but that’s a relatively minor thing.

I didn’t get much chance to look at the booths in the solutions hall, but it’s difficult when you’re a partner with long standing relationships with vendors to have something new to talk about sometimes. I did however get to see some old ex-colleagues as well as chatting to some folks I hadn’t seen in years, which was great.

 

08-11-14

Adventures in NVIDIA GRID K1 and Horizon View

Just had a really interesting week helping a customer with a proof of concept environment using Horizon View 6 and an NVIDIA GRID K1 card, so I thought I’d blog about it. You know, like you do. Anyway, this customer already had a View 5.2 environment running on vSphere 5.1 on a Cisco UCS back end. They use 10 Zig V-1200 zero clients on the desktop to provide virtual desktops to end users. As they are an education customer providing tuition on 3D rendering and computer games production, they wanted to see how far they could push a K1 card and get acceptable performance for end users. Ultimately, the customer’s goal is to move away from fat clients as much as possible and move over to thin or zero clients.

I have to say that in my opinion, there is not a great deal of content out there about K1 cards and Horizon View. One of the best articles I've seen is by my good friend Steve Dunne, where he conducted a PoC with a customer using dedicated VGA (vDGA). In our case, we were testing vSGA (shared VGA), as the TCO of dedicated would be far too high and impossible to justify to the cheque signers. This PoC involved a Cisco UCS C240-M3 with an NVIDIA K1 card pre-installed. The K1 card has four GPU cores and 16GB RAM on board and takes up two PCI slots.

I’m not going to produce a highly structured and technical test plan with performance metrics as this really isn’t the way we ran the initial testing. It was really a bit more ad hoc than that. We ran the following basic tests:-

– Official 720p “Furious 7” trailer from YouTube (replete with hopelessly unrealistic stunts)

– Official 1080p “Fury” trailer from YouTube (replete with hopelessly unrealistic acting)

– Manipulating a pre-assembled dune buggy type unit in LEGO Digital Designer

– Manipulating objects in 3DS Max

When we started off at View 5.2, we found that video playback was very choppy, lip sync was out on the trailers and 3D objects would just hang on the screen. Not the most auspicious of starts, so I checked that the NVIDIA VIB for ESXi 5.1 was the latest version (it was) and that ESXi was at 5.1 U1. From there we made a single change at a time and re-ran the test list above to see what difference each made. We initially set the test pool’s 3D renderer to Hardware, with 256MB of video RAM. We ran the gpuvm command on the host to ensure the test VMs were indeed assigned to the GRID K1 card, and used nvidia-smi -l to monitor GPU usage during the testing process.
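For anyone wanting to repeat those checks, this is roughly how they look from an SSH session on the host (illustrative rather than a verbatim capture):

# Confirm the NVIDIA VIB is present and note its version
esxcli software vib list | grep -i nvidia

# List the VMs currently bound to the GPU, with their reserved memory
gpuvm

# Loop nvidia-smi output to watch GPU utilisation while the tests run
nvidia-smi -l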

Remember that when you configure video RAM in a desktop pool, half of the RAM is assigned from host memory and the other half comes from the GRID K1 card, so keep this in mind when sizing your environment. The customer’s goal is to provide enough capacity for 50 concurrent heavy 3D users per host, so two GRID K1 cards per host is what they’re looking at, pending testing and based on the K120Q profile.
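To put some rough numbers around that (mine, not the customer’s, and assuming I’ve remembered the K120Q profile correctly as a 512MB frame buffer with up to 8 desktops per physical GPU):

50 desktops x 256MB configured video RAM = 12,800MB
  half from host RAM                     =  6,400MB
  half from the K1 frame buffer          =  6,400MB (of 16GB per card)

K120Q: 4 GPUs per K1 x 8 desktops        = 32 desktops per card
       2 x K1 per host                   = 64 desktops, comfortably covering the 50-user target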

So after performing some Google-fu, the next change we decided to make was to add the Teradici audio driver to the base image. The main reason for this was an apparently known issue with 10ZiG devices and audio sync, so we thought we’d give it a try. Although audio quality and sync did improve, it still didn’t really give us the results we were looking for.

Having gone back to the drawing board (and the forums), the next change we made was to upgrade the View Agent on the virtual desktop from 5.2 to 5.3. Some customers in the VMware Communities forums had observed fairly major performance improvements from doing this, without needing to upgrade the rest of the View infrastructure to 5.3. We did this and boy did things improve! It was like night and day: the video barely flickered at all and the audio was perfect. At this point we decided that View 5.2 was obviously not going to give us the performance we needed to put this environment in front of power users for UAT.

The decision was taken to upgrade the identical but unused parallel environment at another site from vSphere 5.1 and View 5.2 to vSphere 5.5 U2 and View 6.0.1. The reasoning was that I knew PCoIP had improved markedly in 6.0.1 from the 5.x days, plus it meant we were testing on the latest platform available. Once we upgraded the environment and re-ran the tests, we saw further improvement without major pool or image changes. We updated VMware Tools and the View Agent to the latest versions as part of the upgrade, and the customer was really impressed with the results.

In fact, as we were watching the 720p and 1080p videos, we remarked that you’d never know you were watching them on a zero client, streamed from a data centre. That remark is quite telling: if a bunch of grizzled techies say that, end users are likely to be even more chirpy! We also performed more rudimentary testing with LEGO Digital Designer and 3DS Max, with improved results. The PoC kit has now been left with the customer, as we really need subject matter experts to test whether or not this solution provides acceptable end user performance to totally replace fat clients.

What is the takeaway from this?

The takeaway, in my opinion, is that VDI is following a similar path to the one datacentre virtualisation followed a few years back. First you take the quick and easy wins such as web servers and file servers, and then, once confidence in the platform has been established, you get more ambitious and virtualise database and messaging servers.

VDI started with the lighter needs of the “knowledge user”, who uses a web browser and some Office applications at a basic level. Now that this target has been proven and conquered, we’re moving up the stack to concentrate on users who need more grunt from their virtual desktop. The improvements in Horizon View and ESXi, together with the support of hardware vendors such as NVIDIA and Teradici, mean near-native levels of performance can now be achieved for some heavy use cases at a sensible cost.

That being said, running a PoC will also help you find where the performance tipping point is and what realistic expectations are. In our case, early testing has shown that video, audio and smaller scale 3D object manipulation are more than feasible and a realistic goal for production. However, much heavier workloads such as the Unreal Development Kit may still be best suited to dedicated gaming hardware rather than a virtual desktop environment. The one thing I haven’t mentioned, of course, is that the customer has GigE to the desktop and a 10GigE backbone between their two sites. This makes a huge difference; I doubt we’d have seen the same results on a 100Mbps-to-the-desktop, 1Gbps-backbone equivalent environment.

The customer will be testing the PoC until the end of December, and with luck I can share the results nearer the time, including whether or not they proceed and, if so, how. Hopefully this has been of some help to anyone researching or testing NVIDIA GRID cards in a VDI environment.

NVIDIA GRID Card

 

17-10-14

ViewDbChk – A Handy Fling for Horizon View

VMware Flings are a curious concept. Not quite production code, yet not quite hobby code either. The Flings site is, however, a veritable goldmine of useful tools and applications to complement, manage and augment your VMware estate. For example, Auto Deploy is a very useful feature of vSphere, but I don’t see a lot of it in the wild, as most folks think it’s a little too “command line” to be worth the effort. Having to do most of the work in PowerShell puts a lot of people off (and I have to say, I don’t blame them – I winced when it came up on my VCAP-DCA exam). So what has someone done? Written a GUI tool for it. A solution that fixed a problem, not a solution looking for a problem.

I’m not here to discuss that one, however. I’m primarily an EUC boy, and there are several useful (did I say free?) tools on VMware Flings that can bail out your average Horizon View Joe. One of these is ViewDbChk. Yes, I know it doesn’t have a fancy name, but it’s a true bacon saver in the sense that I found myself saying “Wow, ViewDbChk really saved my bacon there”. As View folks will know, View data is splattered across various repositories: a database for View Events, a database for View Composer, a database for vCenter and the ADAM/LDS database replicated between the Connection Servers.

This is where ViewDbChk comes into play. On occasion, what is seen in vCenter and what is shown in View Administrator can get a little out of whack, and this can slow you down or cause unforeseen issues. For example, if you’re plain impatient like me, you may find that a pool deletion has apparently hung, so you help it along by manually torching some VMs and replica VMs from vCenter (in the lab, obviously – never in the real world!). At this point the delete task can stall because there are no longer any objects for View to refer to. Or it can go the other way, where View has left orphaned VMs behind in vCenter and you just need to perform a little housekeeping. ViewDbChk is really useful for performing this clean sweep and keeping your vCenter and View installations in sync.

To use the tool, first download it from the Flings website, unzip it and copy the files over to any Connection Server in the View installation you want to tidy up. ViewDbChk is a command line tool, so you then need to start a command prompt, making sure you right click and use the “Run as Administrator” option – otherwise you’ll get some fairly cryptic errors, which caught me out at first. From your administrator command prompt, run ViewDbChk --scanMachines. This will run through and check your View estate for issues, as shown below (desktop/pool names redacted!) :-

 

scan1

 

From this point on it’s really just a case of following the bouncing ball. One thing to note is that for me, just typing “y” or “n” at the prompts did not seem to work properly, so I typed in “yes” and “no” instead. Obviously, for the tool to work its magic, you kind of have to let it do its thing 😉

Once some duff VMs have been found, you are asked to disable the affected pool. Then, as shown below, the affected VMs are listed along with the reason for the issue – in this case, the master VM or snapshot couldn’t be found in vCenter (oops!). At this point the tool asks if you want it to tidy that up; say yes and it will do the rest. Once all affected VMs have been cleaned up, you are prompted to re-enable the pool, as per the screenshot below.

scan2

This is all well and good for a couple of VMs, but what if your View environment is in a stinky old state? Well, you can add switches to the tool to force through a tidy-up without having to repeatedly answer “yes”. Also worth bearing in mind is the built-in limit of 5 VMs per run before you have to run the tool again. To get around this, use ViewDbChk --scanMachines --force --limit 100 to set an upper limit of 100 VMs (or whatever you deem appropriate). Both invocations are shown together below.
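Putting the two side by side for reference (run from an elevated command prompt on the Connection Server):

rem Interactive scan, prompting before each fix (default limit of 5 machines)
ViewDbChk --scanMachines

rem Unattended tidy-up, raising the per-run machine limit to 100
ViewDbChk --scanMachines --force --limit 100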

Props to Griff James for this cool tool, and happy flinging!!

 

15-08-14

VCAP-DTA Consolidated Study Guide 1.5 Released

Now that I have finally passed, I’ve been back over the Consolidated VCAP-DTA Study Guide and updated it. I’ve made some small formatting changes so it’s a little easier to read, corrected a few typos I found, and added the two quick reference tables for PCoIP and Windows image tuning that I blogged about previously. I’ve also added a few exam tips for those thinking of sitting it soon. As this road has now come to a bit of an end, I won’t be maintaining the guide from here on, at least until the VCAP6-DTA is released, which I expect to be a little way off yet.

I’ll also update the sample questions guide, but that may follow in a week or so.

Enjoy!