03-04-14

vExpert 2014 Announcement

 

VMware-vExpert-2014-400x57

 

So Tuesday saw the announcement of the 2014 list of vExperts and I’m delighted to say that I made the cut this year (after checking, of course, that it wasn’t an April Fool!). Actually, it’s the first time I’ve applied and looking down the list, it’s a “who’s who” of vRockstars from around the globe, including around a dozen of my ex-colleagues at Xtravirt, who continue to add a lot of value to the community.

A big thanks of course goes to the team who make vExpert possible – getting through 700+ applications in a month can’t have been easy! Thanks too to Jason Gaudreau, our TAM at VMware, who suggested I should go for it in the first place. When I look back at the last year, I’ve done a lot – 3 VCAPs, a load of blog content, study guides, plus the work I’ve done with VMware PSO and the account management team since I’ve been at MMC.

You’d think that I might sit back now and rest on my laurels, but if anything, it’s actually making me want to do more. I’ve already offered to present at our local VMUG, I’m blogging as often as I can and there will be more VCAPs this year I’m sure, as I start on the vCloud path once I’ve got NetApp, VCAP-DTA and Hyper-V out of the way!

Looking forward now to getting started and continuing to spread the gospel of virtualisation. Congratulations to all 2014 vExperts both new and returning and thanks for making the community awesome!

 

02-04-14

VCAP-DTA – Objective 5.2 – Deploy ThinApp Applications using Active Directory

Once we have a repository configured for our ThinApps, we next continue the groundwork by preparing Active Directory. We can then harness Active Directory groups to control access to the ThinApps.

  • Create an Active Directory OU for ThinApp packages or groups – From your domain controller, go to Administrative Tools and select Active Directory Users and Computers. From wherever in the hierarchy the exam asks you to, right click and select New, Organizational Unit. Give the OU a name and click OK.
  • Add users to individual ThinApp package OU or groups – Again not really a View skill as such, just some basic AD administration. Once you’ve created your OU(s) as above, to create a user right click on the ThinApp OU, click New, User, fill out the appropriate details, click Next, enter password information and click Next and Finish. To add a group, right click on the appropriate OU, click New, Group, give the group a name, select the type and click OK. To add users to an existing group, double click the group, click Members, Add, enter the user names and click Check Names. Click OK twice.
  • Leverage AD GPOs for individual ThinApp MSIs – Group Policy can be used to publish an existing ThinApp MSI without the need for a repository, or in parallel. To configure this, go to Administrative Tools, Group Policy Management. Right click the OU in which you would like to create the GPO. Select Create a GPO in this domain, and link it here (for a new GPO), or select Link an existing GPO if asked. Name the GPO and click OK. Once the GPO is created, right click on it and select Edit. In either Computer Configuration or User Configuration select Policies and then Software Settings. Right click on Software Installation and select New, Package. Browse to the network location of the MSI, select the MSI and then Open. Accept the defaults to Assign the package to a user or computer or click Advanced for further settings. Click OK. If you select Advanced, use the tabs across the top to make changes as appropriate and click OK. You may need to run gpupdate.exe to refresh Group Policy.
  • Create and maintain a ThinApp login script – The ThinReg utility can be used in an existing login script to deploy ThinApps to users. For example, in the NETLOGON share, you can add a line or lines into the logon script to invoke thinreg.exe. In its simplest form, just add the line thinreg.exe \\server\share\application.exe /Q. The /Q switch just runs the command silently. It may well crop up as a specific requirement on the exam.
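The logon script lines above can be generated with a tiny helper if you have several packages to register – a hedged Python sketch for revision purposes only (the share path and application names are placeholders, not real packages):

```python
# Sketch: emit thinreg.exe lines for a logon script. Illustrative only –
# the share and app names below are made up, not from any real deployment.

def thinreg_line(unc_path: str, quiet: bool = True) -> str:
    """Build a single thinreg.exe invocation for a logon script."""
    line = f"thinreg.exe {unc_path}"
    if quiet:
        line += " /Q"  # /Q runs the registration silently
    return line

def logon_script(share: str, apps: list) -> str:
    """Emit one thinreg line per ThinApp package on the share."""
    return "\n".join(thinreg_line(f"{share}\\{app}") for app in apps)
```

So `logon_script(r"\\server\thinapps", ["word.exe"])` gives you `thinreg.exe \\server\thinapps\word.exe /Q`, ready to paste into the NETLOGON script.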

01-04-14

VCAP-DTA – Objective 5.1 – Create a ThinApp Repository

There are two objectives in this section, which are around setting up the ThinApp repository on the network that the View infrastructure uses to distribute applications. It’s telling that this topic references several external tools, so we’re going outside the confines of the View Administration guide for the first time.

Again it’s difficult to imagine within the confines of a tight three hour exam that you will be asked to package up anything other than a relatively simple application, but be prepared for the odd curve ball. Ultimately as long as you understand the fundamentals, you can go a long way to scoring points on this objective, even if you don’t get it completely right.

  • Create and configure a ThinApp repository – The creation of the ThinApp repository is done from within View Administrator. Go to View Configuration, ThinApp Configuration, Add Repository then enter in a Display Name and Share Path (e.g. \\server\thinapp\repo) and add a Description if you like.
     
  • Configure a ThinApp repository for fault tolerance using DFS or similar tools – In order to create a DFS share, you need to have the File Services role enabled on the server. DFS is essentially a network share made up of chunks of storage from different servers. You reach the DFS share by using the path \\domain\dfsroot, so for example \\beckett.local\dfs-share. DFS also has file replication technology built in that you can use for further resilience. I can’t really see you being asked to do too much with DFS in the exam, as much of this is based on the Windows server itself. What you will probably need to know is how to point a ThinApp repository at a DFS share (so use the example syntax above). This is pretty much all that is listed in the ThinApp reference materials.
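If it helps the path format stick, here’s a throwaway Python sketch that composes a domain-based DFS path in the \\domain\dfsroot form described above (the domain and root names are just the example ones from this post):

```python
# Sketch: compose a domain-based DFS namespace path, \\domain\root[\folder].
# Purely a mnemonic for the path syntax – nothing to do with an actual DFS API.

def dfs_path(domain: str, root: str, folder: str = "") -> str:
    parts = [domain, root] + ([folder] if folder else [])
    return "\\\\" + "\\".join(parts)
```

`dfs_path("beckett.local", "dfs-share")` returns the example repository path used above.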

27-03-14

North West England VMUG Meeting Review – 26th March

I had the pleasure yesterday of attending the latest North West VMUG meeting at the Crowne Plaza hotel in Manchester. As usual it was a half day event, but this time with the added extra of some free training in the morning provided by community stalwart Mike Laverick. I didn’t attend this myself, but I’m sure it was very well received by those that did.

Owing to the late withdrawal of local community hero Ricky El-Qasem, there was a slight rejig to the schedule. Dell basically provided a “twofer” session, showing off their DVS solution stack and also the new VRTX (pronounced “Vertex”) all in one server stack in a single 5U unit. We then had a session from local cloud providers 1st Easy and to round the day off, we had an interesting session from Mike Laverick around the concept of “FeedForward”.

So Dell kicked off with Simon Isherwood discussing their DVS model, and I was immediately wishing they’d call it something else as a DVS is something totally different to me – a Distributed Virtual Switch! Such is life in the IT industry that many acronyms overlap, so we just have to live with it. Not Simon’s fault, I’m sure. The purpose of the DVS is that it provides a reference architecture for deploying not just Horizon View, but Citrix XenDesktop and other solutions atop Dell hardware and services.

As many will be aware, Dell have been on a bit of an acquisition spree in the last few years, notably picking up Quest and also Wyse in that time.  That’s significant because Quest have vWorkspace, which is also a brokered VDI solution. Wyse is significant as you could argue it’s the “de facto” choice for thin and zero client solutions in a VDI deployment.

As always there were a raft of facts and figures, but some of the more telling stats were that it has been forecast that by 2016 there will be 200 million employees taking part in BYOD initiatives and Dell have noticed anecdotally that there are many more clients coming forward now looking to do something in the VDI space.

What was good to hear was that Dell are as agnostic as possible in their stack, so obviously they would prefer you to go down the all Dell route of Dell servers, professional services, networking and storage, but where brownfield sites have existing arrangements for any of the previous items, Dell can work within these boundaries to design and implement a VDI solution. The DVS model provides white papers on compatibility and scalability testing, to remove those time consuming steps from a VDI deployment project and give you some confidence on what sort of scales you can achieve.

There were other discussion items around the use of nVidia Grid and Lynx cards to provide high end graphics for VDI solutions, but the thing that probably turned heads the most was the Cloud Connect stick. This is basically a stick not much bigger than a regular USB stick that has an MHL port, USB On-The-Go support and a slot for additional SD storage. You basically plug the stick into an HDMI socket (and you can loop the USB On-The-Go cable for powered support), attach a Bluetooth mouse and keyboard and it essentially becomes a thin client. The stick is around £130 and is an Android device with View, XenDesktop and Google Play support. Dell have rubbed some awesome sauce on this device!

All the thin/zero devices are managed via Cloud Client Manager, which is a web based service that provides MDM services such as device wipe, firmware updates etc. As a matter of fact, you cannot use a Cloud Connect stick unless it has access to Cloud Client Manager, according to Dell. Well worth checking out if you get the chance.

We then had a quick run through the development of the VRTX platform. It seems the main driver for the design of this solution was smaller businesses or branch offices, where the server room was generally a cupboard with random bits of hardware, some four gangs stretched across the room and some strategically placed desk fans. The purpose of VRTX is to take all of these components and shrink them down into a 5U form factor chassis. It can be rack mounted or free standing and takes up to 4 half height blade servers or 2 full height blades. It also has internal DAS storage and comes with a variety of configuration options.

One feature Dell was particularly keen to emphasise was the volume of the chassis itself. You would usually expect enterprise grade server platforms to sound like a plane taking off, but the VRTX has been designed to be whisper quiet for a small office setting, so theoretically you could have it powered on in an open plan office and nobody would ever know. Dell switched it on during the presentation and I can verify it was indeed a very quiet piece of kit!

For large scale geographical deployments, there is a web based management tool with a management map so administrators can drill down and manage VRTX devices. A proof point for the solution is Caterham F1, who have consolidated their track side kit from several flight cases down to just a few VRTX devices.

Two sneaky pictures of a powered on VRTX unit!

IMG_20140326_152439

 

Then came Stephen Bell, the MD of local cloud provider 1st Easy. This presentation was slightly more abstract, with the title “From waterwheels to cloud”. The premise was that during the industrial revolution, choices were made around how power was generated, and the waterwheel was a fixed solution that had inherent flaws. This then led on to a discussion of energy costs, which these days seem to be the primary driver for virtualisation.

I seem to recall Stephen said their energy costs had gone up three fold in eight years, and that trend is only set to rise. As such, they made the strategic decision to consolidate servers into VMware technologies such as vSphere and vCloud Director, to allow them to provide the same level of service but at a much smaller footprint and therefore cost. Also, as opposed to the concept of a waterwheel being a fixed and rigid design model, virtualisation and cloud had allowed them to become more agile as a service provider, and this was a key business driver from the word go.

The final main presentation was from Mike Laverick, discussing the concept of “FeedForward”. He started the session by discussing how user groups tend to be dominated by vendors, mainly because attendees fear presenting themselves. This can be for a variety of reasons, for example :-

  • “I only have a few hosts”
  • “Nobody is interested in my small project”
  • “My project failed, who wants to hear about that?”
  • “I’m boring!”
  • “I’m not confident enough to present in front of an audience”

A few years back, I was part of the Novell community in the UK and Europe and we had similar problems trying to get customers to present to the UG. The fact is, when a customer presents, it re-invigorates the audience. Instead of the same old faces and voices, and presentations about similar storage solutions for example, you get some “real world” insight into what worked, what didn’t work, what we learned etc.

The drive now is to try and engage VMUG members to present more frequently by employing the “FeedForward” mechanism. In essence, it’s a mentoring system, whereby a senior member of the community will help you design and present your slide deck, offer guidance on what works and what maybe doesn’t, and perhaps even stand up with you when you do it.

As the name suggests, you get constructive dialogue going before you present rather than after, so it’s not feedback as such. So when the big day comes and you present to your local VMUG, you can have confidence that what you’re presenting is interesting, factually correct and has been proofread by a different pair of eyes.

So for my sins I volunteered to present at the next meeting on June 11th, I’m thinking about discussing VMware certification. I’ve done a bagful of VCPs and VCAPs, so it seems like something I can talk about for 45 minutes!

To round things off, we had the usual vNews update from Ashley Davies. This covered topics such as vSAN and there was also some discussion on a bug with Windows Server 2012 when using the E1000 adapter that causes data corruption. As we use VMXNET3, we haven’t seen this thankfully, but one to be aware of.

As usual, thanks to VMUG leaders Steve Lester and Nathan Byrne and sponsors Dell and 1st Easy for another super event. The vBeers afterwards were good fun and those mini fish and chips portions were very popular!

 

06-03-14

VCAP-DTA – Objective 3.2 – Configure and Manage Pool Tags and Policies

This objective is relatively short and only has one skill being measured, the ability to correctly configure tags. As a refresher, tags can be used to provide a level of security on connection servers and pools and gives the ability to provide what VMware refers to as “Restricted Entitlement”, which means Connection Servers can only access certain pools. The most obvious and common use case for tagging is when Security Servers are in play, and you want to restrict incoming users from the internet to only use particular Connection Servers.

So then, with only one skill/ability being measured in this section, let’s get to it!

  • Configure tagging for specific Connection Server or security server access – Tagging is done from within View Administrator. You can set tags on Connection Servers and also on pools. One thing you need to be aware of is tag matching – this defines whether or not a user is permitted access to a desktop and will most likely be something you’ll be tested on in the exam.
    • To set a tag on a Connection Server, go to View Administrator and View Configuration, Servers, Connection Servers, choose your Connection Server, click Edit and in the top box, assign the tags you want to use. The example below illustrates two tags in use. This is an internal Connection Server, so it’s been tagged as “Internal” and “Secure”. Note a comma separating multiple tags.

tags

    • To add tags to an existing pool, in View Administrator go to Inventory, Pools, select the Pool you wish to tag, click Edit and then Pool Settings. At the top of this screen is General and Connection Server Restrictions. Click Browse and click the Restricted to these tags radio button. Select the appropriate tag as per below :-

pool-tags

    • Click OK to apply the setting.
    • To apply a tag during pool creation, when you get to the Pool Settings screen, you basically access the same dialog screen. So under the General heading at the top, go to Connection Server Restrictions, click Browse and select the appropriate tag as shown above.
  • In respect of tag matching, be aware of the following matrix as you may be asked to troubleshoot an access issue during the exam which may be caused by incorrect tagging :-
    • Connection Server no tags – Pool no tags – access permitted
    • Connection Server no tags – Pool one or more tags – access denied
    • Connection Server one or more tags – Pool no tags – access permitted
    • Connection Server one or more tags – Pool one or more tags – access depends on tags matching
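The matrix above boils down to a simple rule – if the pool is restricted to tags, the Connection Server must carry at least one of them. A quick Python sketch of that logic (purely my own illustration for exam revision, not anything View itself exposes):

```python
# Sketch: the tag-matching matrix as a predicate. Tags are compared as a set
# intersection; each branch mirrors one row of the matrix above.

def access_permitted(cs_tags, pool_tags) -> bool:
    if not pool_tags:
        return True   # pool has no tag restriction: any Connection Server works
    if not cs_tags:
        return False  # tagged pool but untagged Connection Server: denied
    # both tagged: permitted only if at least one tag matches
    return bool(set(cs_tags) & set(pool_tags))
```

So an internal Connection Server tagged “Internal, Secure” can reach a pool restricted to “Secure”, but an untagged one cannot.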

VCAP-DTA – Objective 3.3 – Administer View Desktop Pools

This objective is the guts of spinning up virtual desktops for users, and covers the full range of desktop pool types available. So full and linked clone pools, assignment types, Terminal Services or manual pools, user and group entitlements and finally refreshing, recomposing and rebalancing pools. Sounds like a lot, but actually there’s a nice flow to this objective and it should be quite straightforward.

  • Create and modify full or linked-clone pools – To create a new pool in View Administrator, go to Inventory, Pools, Add. The pool creation wizard is generally pretty easy to follow and there’s not much value I can add to it here. Click Next until you reach the third screen of the wizard, entitled vCenter Server. This screen provides the option for Full virtual machines or View Composer Linked Clones. Select the appropriate radio button for the type you want and continue on through the screens to finish the pool creation wizard. The choice selection screen is shown below :-

pool-type

    • To modify an existing pool, go to Inventory, Pools, select the pool you are interested in and click Edit. You can change various settings on an existing pool, such as the pool display name, remote protocol settings, power management, storage accelerator etc. You cannot change the pool type once it has been created.
  • Create and modify dedicated or floating pools – In the initial pool definition screen, select Automated Pool or Manual Pool. When you click Next, you are presented with the choice of creating a Dedicated or Floating pool. Remember dedicated pools mean once a user is assigned a desktop, they own it “forever”, whereas a floating pool is in essence the “next cab off the rank” and is not persistently tied to a single user. Each type has its own use case. From here, complete the wizard with the required settings to provision the pool.
    • To modify an existing pool, go to Inventory, Pools and select the pool you wish to modify. Click Edit and make changes as appropriate. With a dedicated pool, your only option is to enable/disable automatic assignment. A floating pool has additional options for editing settings, including vCenter Settings (changing datastores etc.) and also Guest Customizations.
  • Build and maintain Terminal Server or manual desktop pools – Manual and Terminal Services pools are an extension of View by adding in the View Agent to an existing virtual machine, Terminal Server or even a physical PC or blade PC.
    • To add a manual pool, ensure the agent is installed on the endpoint (and you may be tested on this!), then go to Inventory, Pools, Add, Manual Pool. Again the wizard is pretty straightforward – populate all the settings you need.
    • To add a Terminal Services pool, again make sure the View Agent is installed on the endpoint before you proceed.
  • Entitle or remove users and groups to or from pools – Once you’ve built your pools, you also need to add an entitlement. This is simply the users and/or groups from Active Directory that you want to grant desktop access to. This can be done in one of two ways – either when the pool is created (on the final wizard screen, tick the box to entitle users after the wizard finishes) or afterwards, if you forget during pool creation or want to add additional users or groups. If you choose to entitle on completion, click Add and use the search box to find the users or groups you want to entitle, as shown below :-

entitlements

    • To add entitlements retrospectively, go to Inventory, Pools, Entitlements and this brings you into the same dialog as above where you simply repeat the same steps to add users and/or groups.
  • Refresh, recompose or rebalance pools – Depending on your design or operational procedures (or if you’re asked to by the exam!), you will need to refresh, recompose or rebalance your desktop pools. As a refresher, this is what each term means :-
    • Refresh – Reverts the OS disk back to the original snapshot of the clone’s OS disk
    • Recompose – Simultaneously updates all linked clone machines from the anchored parent VM, so think Service Pack rollout as a potential use case
    • Rebalance – Evenly redistributes linked clone desktops among available datastores
    • To perform these operations, the desktops must be in a logged off state with no users connected. Go to View Administrator, Inventory, Pools and select the pool you want to manage. Under the Settings tab, click the View Composer button and choose the operation – refresh, rebalance or recompose
    • When you choose the refresh action, you specify when you want the task to run and whether you want to force users to log off or wait for them to log off. You can also specify a logoff time and message, this is customisable from Global Settings. Check your settings and hit Finish to start the operation.
    • When you select recompose, select the snapshot you want to use and whether or not to change the default image for new desktops. Again run through the scheduling page and choose your settings, click Next and Finish.
    • When you select rebalance, you simply fill out the scheduling page and click Finish.
    • Remember if you’re asked to set a custom logoff message, this is done from View Configuration, Global Settings, Display warning before forced logoff.

02-03-14

VCAP-DTA – Objective 3.1 – Configure Pool Storage for Optimal Performance

So this objective sees us moving into section 3 which is entitled “Deploy, Manage, and Customize Pool Implementations”. This objective deals with how we use storage tiers for different virtual disks and use cases, and the sub settings within them. So as usual, let’s run through the skills and abilities for this objective :-

  • Implement storage tiers – When creating a Composer based pool, select the option in the Storage Optimization wizard screen to separate out disks to different datastores. Depending on the exam scenario, you may be asked to separate the Persistent Disks and/or the Replica Disks. Depending on what you select, when you click Next you will get a differing set of options. Assuming you select both, on the vCenter Settings screen, use options 6, 7 and optionally 8  to choose which datastores are used and for which purpose. Once you have completed your choices, complete the wizard out to create the pool.
  • Optimize placement of replica virtual machine – The replica disk is the disk that gets hammered with read requests from users, so you will be asked to place this on high performance storage, most likely SSD. Using the steps detailed above, use the vCenter Settings screen of the pool wizard to choose a high performance datastore for the replica disk. The diagram below illustrates this point.

replica-ds

  • Configure disposable files and persistent disks – Again this is selected in the pool wizard. You can see from above that there is a View Composer Disks section. This defines how disposable disks (so think temp files) and persistent disks (user profile) are handled. For the Persistent Disk, you can select a disk size and drive letter and choose to redirect the user profile to this disk. The same goes for the Disposable Disk – select the size, whether or not to redirect and which drive letter to use. See below for an illustration of this.

composer-disks

  • Configure and optimize storage for floating or dedicated pools – This is pretty much covered by the first section, Implement Storage Tiers.
  • Configure overcommit settings – This setting is used when using View Composer. The purpose of overcommit is to allow more disk capacity to be provisioned than physically exists on the datastore. This works because the disks are sparse disks on the datastore. The choices for overcommit are None (datastore is not overcommitted), Conservative (4x, the default), Moderate (7x) and Aggressive (15x). Select the datastore and choose the level of overcommitment from the drop down menu. These choices are only available for OS and Persistent Disks. See below for an example of the dialog.

overcommit
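For revision purposes, here’s a quick Python sketch of the provisioning ceiling each level gives you, treating None as no overcommit (1x) and using the multipliers listed above – the helper itself is my own illustration, not a View API:

```python
# Sketch: logical capacity ceiling per storage overcommit level.
# Multipliers follow the levels discussed above; None = not overcommitted.

OVERCOMMIT_FACTOR = {
    "None": 1,          # datastore is not overcommitted
    "Conservative": 4,  # the default level
    "Moderate": 7,
    "Aggressive": 15,
}

def max_provisioned_gb(datastore_gb: int, level: str) -> int:
    """Logical disk capacity View will allow to be provisioned on the datastore."""
    return datastore_gb * OVERCOMMIT_FACTOR[level]
```

So a 500 GB datastore at the default Conservative level can have up to 2 TB of sparse linked-clone disks provisioned against it.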

  • Determine implications of using local or shared storage – So in most cases you will be looking to use shared storage, but there may be occasions (and exam scenarios) where you will be asked to use local storage (or its use is implied by the question). Bear the following in mind from the View Administration Guide :-
    • You cannot load-balance virtual machines across a resource pool. For example, you cannot use the View Composer rebalance operation with linked clones that are stored on local datastores
    • You cannot use VMware High Availability
    • You cannot use the vSphere Distributed Resource Scheduler (DRS)
    • You cannot store a View Composer replica and linked clones on separate datastores if the replica is on a local datastore
    • If you store linked clones on local datastores, VMware strongly recommends that you store the replica on the same volume as the linked clones. Although it is possible to store linked clones on local datastores and the replica on a shared datastore if all ESXi hosts in the cluster can access the replica, VMware does not recommend this configuration
    • If you use floating assignments and perform regular refresh and delete operations, you can successfully deploy linked clones to local datastores.
  • Configure View Storage Accelerator and regeneration cycle – The View Storage Accelerator is also known as the Content Based Read Cache (CBRC) on the ESXi host. This is especially useful as common read requests are cached in host RAM, which helps with use cases such as desktop boot storms. Configuration is pretty simple – in the pool creation wizard you make your choices in the Advanced Storage Options screen. Check the box to Use View Storage Accelerator and choose between OS Disks or OS and Persistent Disks. The default is OS disks as this is the usual use case. You also have the option to set a value for Regenerate Storage Accelerator after (days). This basically creates new indexes of the disks and stores them in the digest file for each VM. It’s also worth noting you can configure blackout periods when storage accelerator regeneration will not be run – an obvious example is to suspend this during backups. You may be asked this in the exam. See below for an example.

cbrc

22-02-14

VCAP-DTA – Objective 2.5 – Configure Location Based Printing

So we come to the final objective in section 2, configuring location based printing. In essence, this harnesses the abilities of ThinPrint to enable printing from the View environment, using physical printers located near the end users. There are three measured skills and abilities in this section, listed below.

  • Configure location-based printing using a Group Policy Object – To start with, you need to register the ThinPrint DLL on an Active Directory server to enable the functionality within MMC. To do this, go to any of your Connection Servers and find the file TPVMGPoACmap.dll. There are both 32 bit and 64 bit versions. This file is located under C:\Program Files\VMware\VMware View\Server\extras\GroupPolicyFiles\ThinPrint.
    • Copy TPVMGPoACmap.dll to the Active Directory server (choose the appropriate version, 32/64 bit)
    • Register the DLL by running regsvr32 “C:\TPVMGPoACmap.dll” from a command prompt
    • Start Group Policy Management from Administrative Tools on an Active Directory server
    • Either create and link a new GPO or edit an existing one (depending on the exam scenario)
    • Go to Computer Configuration, Policies, Software Settings and Configure AutoConnect Map Additional Printers.
    • Ensure you select the Enabled radio button to start entering entries into the mapping table. Remember that selecting Disabled without saving first will delete all of your printers!
    • Printer mappings can be used to map printers depending on certain rules, as per the example dialog below

 

thinprint

 

    • You will also need to know the syntax of each column for settings to become effective :-
      • IP Range – 10.10.1.1-10.10.1.50, for example. Or you can use an entire subnet, e.g. 10.10.1.0/24. You can also use an asterisk as a wildcard.
      • Client Name – So in the above example, PC01 maps a specific printer “Printer2”, again an asterisk is used as a wildcard.
      • Mac Address – Use the hyphenated format 01-02-03-04-05-CD for Windows and colons for Linux clients, so 01:02:03:04:05:CD.
      • User/Group – Map a specific printer to a specific user or group, such as jsmith or Finance.
      • Printer Name – This is the printer name as shown in the View session. The name doesn’t have to match names on the client system.
      • Printer Driver – Simply the printer driver name in Windows. This driver must be installed on the desktop.
      • IP Port/ThinPrint Port – the IP address of a networked printer to connect to. This must be prepended with “IP_”, so IP_192.168.0.50 for example.
      • Default – Whether this printer is the default printer.
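To get a feel for how the IP Range column’s three notations behave, here’s a small Python sketch using the standard ipaddress module – my own illustration of the matching rules for revision, not ThinPrint’s actual implementation:

```python
# Sketch: evaluate an IP Range rule (explicit range, CIDR subnet, wildcard,
# or single address) against a client IP, using the stdlib ipaddress module.
import ipaddress

def ip_rule_matches(rule: str, client_ip: str) -> bool:
    ip = ipaddress.ip_address(client_ip)
    if rule == "*":                       # wildcard: matches any client
        return True
    if "-" in rule:                       # explicit range, e.g. 10.10.1.1-10.10.1.50
        lo, hi = (ipaddress.ip_address(p) for p in rule.split("-"))
        return lo <= ip <= hi
    if "/" in rule:                       # whole subnet, e.g. 10.10.1.0/24
        return ip in ipaddress.ip_network(rule, strict=False)
    return ip == ipaddress.ip_address(rule)  # single address
```

Worth trying a few of the example values from the table above against it to cement the syntax in your head.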

 

21-02-14

VCAP-DTA – Objective 2.4 – Backup and Restore View Environment

Now to a key exam objective in my opinion. Like any application, a backup is only as good as its restore, if that makes sense. Or in other words, if you back something up but don’t know how to put it back in a disaster recovery situation, then your backup is about as useful as an ashtray on a motorbike.

The blueprint cites the administration guide, the View Administrator console and vdmexport.exe as the key touchpoints for this objective, so without further ado, let’s get into the skills and abilities tested :-

  • Backup the View Composer database – This is just a general bullet point and is non specific about how to backup View components. There are basically two ways – via View Administrator and via command line using vdmexport.exe. Either way, you can get backups of both View Manager and View Composer data.
  • Backup LDIF or SVI using View Administrator – To backup immediately from View Administrator, go to View Configuration, Servers, Connection Servers, select a Connection Server (remember the ADAM database is replicated) and select Backup Now. If the exam asks you to set a custom schedule for the automatic backup, go to Edit, select the Backup tab and choose the appropriate options. Also note the save path for backups here, as you may be asked to change this too. If you quickly browse to this folder, you should see LDF and SVI backup files, the former for your View Manager configuration, the latter for View Composer.
  • Backup LDIF using vdmexport – This item is specifically geared to backing up the View Manager configuration rather than both View and Composer.
    • You need to know where vdmexport.exe is – it’s located in C:\Program Files\VMware\VMware View\Server\tools\bin
    • To backup to LDIF, run vdmexport -f viewbackup.ldf
    • Also know what the switches do: -f specifies the file name, -v specifies verbatim (plain text) mode and -c cleanses the backup file, removing passwords and other sensitive data. You shouldn’t restore from a cleansed file, so I don’t expect the exam to ask you to do this. The -v and -c switches go after the main backup command, so vdmexport.exe -f backup.ldf -v for example.
  • Restore a View environment from a backup – To restore data from backup, you use the vdmimport.exe tool. This is kept in the same folder as the export tool, noted above.
    • The import process essentially has two steps – first you decrypt the backup file, then you import it back into View. To do this, run vdmimport -d -p password -f backupfile.LDF > decryptedbackup.LDF. Omitting the -p switch will prompt you for the password, if you don’t want to type it in clear text.
    • To import the backup, run vdmimport -f decryptedbackup.LDF.
    • Restoring Composer is slightly more involved, as remember, we have to put the data back into SQL or Oracle. Backup file names for Composer have an .svi extension and are also date stamped. This may come into play in the exam (e.g. restore Composer from June 5th).
    • Copy the .svi backup file from a Connection Server to the server running View Composer
    • Stop the Composer service so the database is not being written to as we restore
    • We use the sviconfig.exe utility to restore the data to Composer. It’s stored in C:\Program Files\VMware\VMware View Composer\sviconfig.exe (look under C:\Program Files (x86) if you can’t find it).
    • sviconfig.exe has five switches, and you need to know them all for a successful restore: -operation, -dsnname, -username, -password and -backupfilepath. -operation tells the utility we want to restore data, -dsnname is the data source name defined under Data Sources (ODBC) in Windows Control Panel, -username is the database administrator account (so not a View administrator, but the one used when creating the database), -password is that account’s password and -backupfilepath is the path to the target .svi backup file.
    • Putting all of that together, the command would look like this :-
      • sviconfig.exe -operation=restoredata -dsnname=ComposerDB -username=ComposerDBO -password=P@ssword123 -backupfilepath=C:\Backup-20140221142435-vCenter.SVI
    • Running sviconfig.exe at the command line will show you the -operation values but little else, so if you can’t remember the other four switches, you may need to quickly lean on the Administration Guide PDF. If you basically think “database”, you should be OK – DSN name, username, password and of course the backup file. Actually quite straightforward.
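To help the commands above stick, here’s a quick revision sketch that composes each backup/restore command as a string and prints it. The database name, user and backup file (ComposerDB, ComposerDBO and so on) are examples from this walkthrough, not live values – substitute your own, and run the real binaries on the Connection Server and Composer server respectively.

```shell
# Revision crib sheet: compose the View backup/restore commands as strings.
# vdmexport.exe and vdmimport.exe live in
#   C:\Program Files\VMware\VMware View\Server\tools\bin
# sviconfig.exe lives in
#   C:\Program Files\VMware\VMware View Composer

# 1. Export the View Manager (ADAM) configuration to LDIF
EXPORT_CMD="vdmexport.exe -f viewbackup.ldf"

# 2. Decrypt an encrypted backup, then import the decrypted file
DECRYPT_CMD="vdmimport.exe -d -p password -f viewbackup.ldf > decrypted.ldf"
IMPORT_CMD="vdmimport.exe -f decrypted.ldf"

# 3. Restore Composer from an .svi backup - all five switches are required
RESTORE_CMD="sviconfig.exe -operation=restoredata -dsnname=ComposerDB -username=ComposerDBO -password=P@ssword123 -backupfilepath=C:\Backup-20140221142435-vCenter.SVI"

echo "$EXPORT_CMD"
echo "$RESTORE_CMD"
```

If you can reproduce the restore line from memory under exam pressure, the rest of this objective is mostly knowing where the tools live.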

16-02-14

VCAP-DTA Objective 2.3 – Configure Syslog and View Events Database

Time for another relatively short section, this objective deals with logging. Skills and abilities from the blueprint :-

  • Configure Events database – This is a relatively short and easy task, provided you remember one thing I find a little quirky about it. Usually when you hook vCenter/VMware components up to a database (and in my case I’m going to say SQL, as I know it best), you configure an ODBC connection from within Control Panel in Windows. For events logging in View, you don’t. You still need to create a database using Management Studio or the Oracle equivalent, plus a user with access to write to it. My supposition is that this will be done in advance for you in the exam, as there is no mention of SQL management skills on the blueprint.
    • Before you start, you will need the DNS name or IP address of the database server, the port number (1433 for SQL, 1521 for Oracle), the database name and a prefix for the events tables. SQL Server also needs to be configured for SQL authentication rather than Windows authentication, but it’s highly unlikely you’ll have to configure that.
    • Go to View Administrator and in the left hand column, click View Configuration and then Event Configuration.
    • In the Event Database section, click Edit and fill out the fields for Database server (IP address/DNS name), Database type (SQL/Oracle), port (1433 SQL/1521 Oracle), Database name, Username, Password, Confirm Password and Table Prefix. Once completed, click OK.
    • You may also have to change the Event Settings. There are two settings here – click Edit, then set Show events in View Administrator for to the appropriate value from the drop down. You can also edit the Classify events as new for setting if asked to on the exam. Chances are you will!
    • Click OK to save the settings. To confirm a successful configuration, click Monitoring in the left hand column and select Events; you should see event messages stacking up in the database. If you don’t, go back and check your settings.
  • Configure Syslog using vdmadmin – This objective is interesting, as it purposely states you have to configure syslog logging from the command line rather than View Administrator. Note that version 1.5 of the exam blueprint has a typo that says to use “vdiadmin” – don’t be confused, that command doesn’t exist.
    • vdmadmin.exe can be found in C:\Program Files\VMware\VMware View\Server\tools\bin and a quick way to see your options is to type vdmadmin -help. This will give you a big list of commands and switches and it’s easy to see what they all do.
    • To configure syslog events to a remote server, type vdmadmin -I -eventSyslog -enable -path \\logserver\share\ViewEvents -user mydomain\myuser -password mypassword

    • You can also configure local logging if the syslog server is on the Connection Server by typing vdmadmin -I -eventSyslog -enable -localOnly
    • To log to a specified path, use vdmadmin -I -eventSyslog -enable -path path
    • To disable syslog, use vdmadmin -I -eventSyslog -disable
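To summarise, the three vdmadmin -I -eventSyslog variants can be sketched like this. The UNC path, domain user and password are placeholders – run the real command from the tools\bin folder on a Connection Server.

```shell
# Sketch of the vdmadmin syslog variants, composed as strings for revision.
# vdmadmin.exe lives in C:\Program Files\VMware\VMware View\Server\tools\bin.

# Send events to a file share (credentials are needed for the UNC path)
ENABLE_REMOTE="vdmadmin -I -eventSyslog -enable -path \\\\logserver\\share\\ViewEvents -user mydomain\\myuser -password mypassword"

# Log locally on the Connection Server itself
ENABLE_LOCAL="vdmadmin -I -eventSyslog -enable -localOnly"

# Turn syslog logging off again
DISABLE_CMD="vdmadmin -I -eventSyslog -disable"

echo "$ENABLE_REMOTE"
```

The pattern to memorise is -I -eventSyslog followed by -enable or -disable, with -path (plus -user and -password for a remote share) or -localOnly on the enable side.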

15-02-14

VCAP-DTA Objective 2.2 – Configure Administrator Roles and Permissions

Continuing on from the previous objective of setting global policies, the next objective on the blueprint calls on the skills required to configure administrator roles and permissions. The source reference for this section is again the View Administration guide, and it can all be done via the View Administrator portal. If you’re a regular View admin, you should find this section reasonably straightforward. So to the skills and abilities :-

  • Create, modify and delete administrator roles – Roles work in much the same way as they do in vCenter. Create a role, assign it a set of privileges and add users/groups to the role. There are some pre-defined roles (Administrators, for example) which may be just fine for what you want, but you can be sure the exam will be looking for you to be more granular than that.
    • Go to View Administrator, and in the left pane, click View Configuration and then Administrators. There are three tabs that can be accessed across the top, Administrators and Groups, Roles and Folders. In the exam, it’s quite possible you may be required to add a privilege and/or permission to a built in role as well as creating a new role. 
    • To create a new role, go to the Roles tab and click Add Role. Give the role a name, an optional description and then tick the privileges you want to be able to assign out.
  • Add and remove user permissions – It’s worth double checking what these privileges do (Register Agent, for example), as you may be asked in the exam to add a non-vCenter source such as Terminal Services. Similar steps apply to deleting roles, if you’re asked to.
    • To modify a role, go into the Roles tab of the Administrators view, click on the custom role and click Edit to add or remove privileges. Remember you can’t edit a built-in role, but you can assign permissions to it.
    • To add a folder, click the Folders tab and click Add Folders. Give the folder a name and an optional description.
  • Assign and Manage permissions on View folders – To add a permission to the folder, click Add Permission, find the AD user or group to add, click Next and select the role you wish to assign to them, then click Finish. This adds an AD group or user to a folder with a set of privileges. To add a pool to a folder for administration, go to the pool and select Edit, choose the folder you wish to assign the pool to and click OK. You can then delegate management of this pool to the role you just created.

Another short section, but roles and permissions is a relatively short topic.