15-04-16

VMware VCAP6-DTM Design – Exam Experience


I just got back from sitting the beta of the VCAP6-DTM Design exam, so I thought I would give a bit of feedback for anyone thinking of doing it at any point in the future. Obviously the caveat to this post is that the exam today was a beta (so still very much in development) and also that it's still under NDA, so no real specifics, I'm afraid.

The exam itself was 38 questions over 4 hours, although I completed it with about an hour to spare. I got the invite a couple of weeks ago and thought “why not?”. It’s only eighty quid, and you don’t often get the chance to sit a VCAP for that low fee.

The design exam takes the form of drag-and-drop and design canvas questions. I kind of felt under no real pressure to deliver on this exam – I'm not currently doing much in the way of the VMware stack, so it was almost a bit of fun. I remember sitting the VCAP5-DTD (as was) and feeling much more pressured for both time and knowledge, but I reckoned it up and that was over three years ago now! Time flies, and I'm certainly much more experienced, not just as an architect but also with View.

I think in the released exam, you only get 6 design canvas questions, but in today’s beta I got a lot more than that! I can’t recall exactly how many, but there were at least a dozen, I’d say. I’m not sure if that was just a data gathering exercise or if that is the way the exam will go, but best to know your reference architectures if you’re planning to sit this exam later in the year.

The exam also seemed to be much more in tune with the way the VCDX is done, in respect of assumptions, constraints, risks and requirements. You also need to understand the differences between conceptual, logical and physical designs, and between functional and non-functional requirements. I think this exam will prepare you much better for a VCDX crack; I can't honestly remember if the original VCAP5-DTD ran along those lines.

In terms of tech, a good chunk of the exam is made up of existing View technologies, so understand all the core components well:-

  • Connection Servers
  • Security Servers
  • Desktop Pools
  • Full and Linked Clone Desktops
  • 3D Graphics
  • ThinApp
  • RDSH (quite a lot of content on that)
  • View Pods
  • Pod and Block Architecture
  • Workspace

I'll be honest and state right now that I've never touched AppVolumes or Mirage, much less seen them in the field. I spent a chunk of time over the last couple of days looking at some of the linked documentation from the exam blueprint, such as reference architectures, use cases and the product documentation.

As it's a design exam, it takes an architectural approach, so you don't need to know which vdmadmin command to run to perform a given task, for example. What you do need to know is what components do what, how they link with each other and what the dependencies are. It's a lot more in depth than a VCP, but if you have spent any time in the field doing a requirements analysis and then a subsequent design and delivery, you should be fine.

I didn't take a lot of care with my answers in the sense that I didn't really agonise over them. I did check them before I moved on, but as I said, I felt no pressure and I really just went with my gut instinct. More often than not, that's the right way.

In terms of non-View components, I’d say you need to know and understand the high level architectures of AppVolumes and Mirage. I can’t recall any questions on the Immidio product, so maybe that didn’t make the cut or maybe my question pool just didn’t contain any. Latterly though, I did get some questions that referred to the “traditional” Persona Management. Wouldn’t hurt to have a basic understanding of Immidio though (or whatever it’s called these days).

There are a few questions where you need to count your fingers – there is no access in the exam to a calculator, which is a massive pain in the arse. Microsoft exams always have it, not sure why VMware seem intent on exam candidates getting their fingers and toes out. Let’s be honest, you wouldn’t do that in the field, would you? I did comment back that a calc would be very handy for someone like me who is incredibly lazy when it comes to arithmetic!

So to sum up, not massively different from the VCAP5-DTD I remember, with core View still very heavily tested. As I mentioned previously, make sure you have a good working knowledge of AppVolumes and Mirage in terms of the architecture and what the component roles are. Probably wouldn’t do any harm to understand and remember what ports are used in which scenarios, either. Configuration maximums too – you’ll need to know how many users a given component will support when designing a solution for a specific number of users.
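
Just to give a flavour of the sort of arithmetic involved (this is not an exam question), here's a minimal Python sketch. The figures are hedged assumptions based on the View reference architecture of the time – roughly 2,000 active sessions per Connection Server and 10,000 sessions per pod – so check the current configuration maximums before trusting them:

```python
import math

# Hedged assumptions - check the current View configuration maximums
SESSIONS_PER_CONNECTION_SERVER = 2000   # active sessions per Connection Server
SESSIONS_PER_POD = 10000                # sessions per View pod

def size_view(users):
    """Back-of-the-envelope sizing: pods and Connection Servers (n+1)."""
    pods = math.ceil(users / SESSIONS_PER_POD)
    # One spare Connection Server for redundancy
    connection_servers = math.ceil(users / SESSIONS_PER_CONNECTION_SERVER) + 1
    return pods, connection_servers

print(size_view(7000))  # -> (1, 5): one pod, four Connection Servers plus a spare
```

No calculator needed for that one, but sadly there's no Python in the exam room either.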

I won’t get the results now until 30th June or so (that’s what the beta exam page says, anyway), so we’ll see. Do I think I’ve passed? Who knows. I’ve given up predicting things like that after I did the VCP-CMA beta thinking I’d done well, only to crash and burn. It has no massive effect on me anyway, as I’m currently 100% focused on AWS and Azure, but it would be nice to top up my collection of VCAPs further. As always, any questions, hit me up on Twitter but just don’t ask for any exam question content specifics.


18-12-15

Amazon Web Services – A Technical Primer for VMware Admins


Yes, yes, I know. Long time no blog. Still, isn't it meant to be about quality and not quantity? That could spawn a million dirty jokes, so let's leave it there. So to the matter in hand. Recently I've been working on a project that's required me to have a much closer look at Amazon Web Services (or AWS for the lazy). Probably like most, I'd heard the name and in my head just thought of it as web servers in the cloud and not much more than that. How wrong I was.

However, like most “cloud” concepts, because ultimately it’s based on the idea of virtualisation, it’s actually not that hard to get your head around what’s what and how AWS could be a useful addition to your armoury of solutions for all sorts of use cases. So with that in mind, I thought it would be really useful to put together a short article for folks who are dyed in the wool vSphere admins who might need to add an AWS string to their bow at some time in the near future. Let’s get started.

As you can see from the picture below, logging into the AWS console gives us a bewildering array of services from which to pick, most of which have exotic and funky names such as “Elastic Beanstalk” and “Route 53”. What I’m going to try and do here is to separate out (at a high level) the services AWS offers and how they kind of map into a vSphere world.

aws-1

The AWS Console

Elastic Compute Cloud (EC2)

Arguably the main foundation of AWS, EC2 is the infrastructure-as-a-service element. Here comes the first of the differences: we no longer refer to VMs as VMs, but as "instances". In much the same way we might define it in vRealize or vCD, there are sizes of instances, from nano up to 8x extra large, which should cater for most use cases. Each instance type has varying amounts of RAM, numbers of vCPUs and also workload optimisations, such as "Compute Optimised" or "Storage Optimised".

Additionally, instance images are referred to as AMIs, which stands for “Amazon Machine Image”. Similar in concept I suppose to an OVA or OVF. It’s a pre-packaged virtual machine image that can be picked from the service catalog to provision services for end users. As you might expect, AMIs include both Windows and Linux platforms and there is also an AWS Marketplace from where you can trial or purchase pre-packaged AMIs for specific applications or services. In the example screen shot below, you can see that when we go into the “Launch Instance” wizard (think “create a new VM”) we can choose from both Amazon’s service catalog but also the AWS Marketplace. Why re-invent the wheel? If the vendor has pre-packaged it for you, you can trial it and also use it on a pay-as-you-go basis.

aws-2

As you can see above, there is a huge amount from which to pick, and it's very much the same in concept as the VMware Solution Exchange. What's notable here is the billing model. Whereas with vSphere we might be thinking in terms of a one-off cost for a licence, with AWS we need to start thinking about perpetual monthly billing cycles, which will also dictate whether or not AWS is suitable and represents value for money.

You can also take an existing AMI, perform some customisation on it (install your application for example) and then save this as an AMI that you can use to create new instances, but these AMIs are only visible to you, not others. I suppose the closest match to this is a template in vCenter. So again, many similarities, just different terminology and slight differences in workflows etc.
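
If you'd rather drive this from code than the console, the same launch flow is available through the AWS API. Here's a minimal boto3 sketch – the AMI ID, key pair name and region are placeholders, so substitute your own:

```python
import boto3

ec2 = boto3.client('ec2', region_name='eu-west-1')  # example region

# Think "create a new VM": pick an AMI (the template) and an instance
# type (the VM size), then launch. ImageId and KeyName are hypothetical.
response = ec2.run_instances(
    ImageId='ami-xxxxxxxx',
    InstanceType='t2.micro',
    MinCount=1,
    MaxCount=1,
    KeyName='my-key-pair',
)
print(response['Instances'][0]['InstanceId'])
```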

It's also worth adding at this point, before I move properly onto storage, that the main storage platform is called EBS, or Elastic Block Storage. It's Elastic because it can expand and contract, it's Block because... well, it's block level storage (think iSCSI, SAN etc.) and Storage because, well, it's storage. At this level, you don't deal with LUNs and datastores, you just deal with the concept of an unlimited pool of storage, albeit with different definitions. In this sense, it's similar to the vSphere concept of Storage Profiles.

Storage Profiles can help an administrator place workloads on the appropriate type of storage to ensure consistent and predictable performance. In AWS’s case, you have a choice of three – General Purpose, Provisioned IOPS and Magnetic. More on this in the storage section, but remember that EBS storage is persistent, so when an instance is restarted or powered off, the data remains. You can also add disks to an instance using EBS, for example if you wanted to create a software RAID within your instance.

You may also see references to Instance Storage. This is basically using storage on the host itself, rather than enterprise grade EBS storage. This type of storage is entirely transitory and only lasts for the lifetime of the instance. Once the instance is powered off or destroyed (terminated in AWS parlance), the storage goes with it. Remember that!

One of the good things about EBS is that, in the main, SSD storage is used. General Purpose is SSD and is used for exactly that. Provisioned IOPS is used mainly for high I/O workloads such as databases and messaging servers, and Magnetic is spinning disk, so the cheapest of the cheap and used for workloads with modest I/O requirements.
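
As a quick illustration, creating and attaching a volume via boto3 might look like the sketch below – the availability zone and instance ID are placeholders:

```python
import boto3

ec2 = boto3.client('ec2', region_name='eu-west-1')

# A 100 GiB General Purpose (SSD) volume; 'io1' is Provisioned IOPS
# and 'standard' is Magnetic.
vol = ec2.create_volume(AvailabilityZone='eu-west-1a', Size=100,
                        VolumeType='gp2')

# Wait for the volume to be ready, then attach it to an existing
# instance (instance ID is hypothetical) - repeat for a software RAID.
ec2.get_waiter('volume_available').wait(VolumeIds=[vol['VolumeId']])
ec2.attach_volume(VolumeId=vol['VolumeId'], InstanceId='i-xxxxxxxx',
                  Device='/dev/sdf')
```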

Amazon S3

So to another service with an exotic hipster name, Amazon S3. This stands for Simple Storage Service and is Amazon’s main storage service. This differs from EBS as it’s an object based file service, rather than block based, which I suppose is more like what vSphere admins are used to.

Amazon refers to S3 locations as “buckets”, and it’s easy to think of them as a bunch of folders. You can have as many buckets as you like and again this storage is persistent. You can upload and download content, set permissions and even publish static websites from an S3 bucket. It’s also worth noting that bucket contents are highly available by way of replication across the region availability zones, but more about that later. By using IAM (Identity and Access Management) you can allow newly provisioned instances to copy content from an S3 bucket say into a web server content directory when they are provisioned, so you are good to go as soon as the instance is.
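
To make the bucket idea concrete, here's a minimal boto3 sketch – bucket names are globally unique, so the one below is a placeholder:

```python
import boto3

s3 = boto3.client('s3', region_name='eu-west-1')

# Create a bucket, then upload and download an object. Keys with '/'
# in them give the "bunch of folders" feel, but it's really a flat
# object store under the covers.
s3.create_bucket(Bucket='my-example-bucket',
                 CreateBucketConfiguration={'LocationConstraint': 'eu-west-1'})
s3.upload_file('index.html', 'my-example-bucket', 'site/index.html')
s3.download_file('my-example-bucket', 'site/index.html', '/tmp/index.html')
```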

You can also have versioning, multi-factor authentication and lifecycle policies, but that’s beyond the scope of this article.

It’s not easy to map S3 to a vSphere concept, so we’ll leave it here for now, but at least you know in broad terms what S3 is.

AWS Networking

One thing that AWS does very well (or very frustratingly, depending on your viewpoint) is hiding the complexity of networking and simplifying it into a couple of key concepts and wizards.

In vSphere, we have the concepts of vSwitches, VDSes, port groups, VLAN tags, etc. In AWS, you pick a VPC (more on that later), a subnet and whether or not you want it to have an internet facing IP address. That’s pretty much it.

In terms of configuring the networking environment, when you sign up to AWS you get a default VPC. This stands for "Virtual Private Cloud" and is what it says it is – your own little bubble inside of AWS that nobody can see but you (analogous to a vCloud Director Organisation vDC). You can add your own VPCs (up to a limit of 5, for now) if you want to silo off different departments or lines of business, for example. Think of a VPC as your vCenter view, but without clusters. VPCs operate pretty much on a simple, flat management model. If you have a Pluralsight sub, it's a good idea to check out Nigel Poulton's VPC videos for a much better insight into how this all works.

VPCs don't talk to each other by default, but you can link them together (and link VPCs from other AWS accounts if you want to). Again, it's difficult to map this to a vSphere concept, but this helps explain what a VPC is.

Each instance will get an internal RFC 1918 type network address (say 10.x or 192.168.x, depending on how CIDR blocks are configured) and those instances requiring external IP addresses will have this added transparently – basically NAT, because the VM does not know about the external facing address. I know it sounds a bit complicated, but actually it's not; I'm just not good at explaining it!
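
If it helps, here's the same idea expressed as a minimal boto3 sketch – the CIDR blocks and region are just examples:

```python
import boto3

ec2 = boto3.client('ec2', region_name='eu-west-1')

# Your own bubble: a VPC with an RFC 1918 CIDR block and one subnet.
vpc = ec2.create_vpc(CidrBlock='10.0.0.0/16')
subnet = ec2.create_subnet(VpcId=vpc['Vpc']['VpcId'],
                           CidrBlock='10.0.1.0/24')

# An external address is an Elastic IP, NATed to the instance's
# internal address - the instance itself never sees it.
eip = ec2.allocate_address(Domain='vpc')
print(subnet['Subnet']['SubnetId'], eip['PublicIp'])
```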

Availability Zones

One last concept to cover is Availability Zones (AZs). Generally there are three per region, and right now there are 11 regions worldwide. You can put workloads wherever you like, but if you want to add things like an Elastic Load Balancer, you can't just scattergun your instances all over the planet.

An AZ in its most basic sense is a physical data centre, so it's easy to understand from a vSphere perspective. However, as there are three AZs per region connected together via high speed, low latency network links, services such as S3 and Elastic Load Balancer (ELB) can take advantage of this. The region is the logical boundary for these services, meaning that S3 data is replicated around all AZs in the region, and load balanced services that sit behind a single ELB can be placed in all three AZs if need be. All of this is configured by default; you don't need to do anything yourself to let this magic happen.
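
You can see the regions and AZs for yourself with a couple of API calls – a minimal boto3 sketch (the region name is just an example):

```python
import boto3

# All regions where EC2 is available...
session = boto3.session.Session()
print(session.get_available_regions('ec2'))

# ...and the Availability Zones within one of them.
ec2 = session.client('ec2', region_name='eu-west-1')
for az in ec2.describe_availability_zones()['AvailabilityZones']:
    print(az['ZoneName'], az['State'])
```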

Managing AWS from vCenter

In all the AWS concepts I've mentioned so far, I've discussed how things are done from the AWS web console. It's also possible to manage and migrate VMs to AWS from vCenter Server; this is done with the AWS Management Portal. I haven't yet tried it, but when I do, I'll come back and write an article about it. This is a key piece of the puzzle though, as it allows "single pane of glass" management for vSphere and AWS.

In Conclusion

Hopefully this has been a useful primer in mapping AWS concepts to vSphere ones. There are lots of services and constructs that are unique to AWS that don’t necessarily map back, but it’s still important to know what they are. I’ve summarised some of the mappings in the table below (and not all of them are directly 1-1 in concept), hopefully I can add more articles in the coming weeks.

  • Availability Zone = Data Centre (physical)
  • VPC = Datacenter (vCenter logical)
  • EBS = Storage Profiles (similar, but not exactly the same)
  • Instance = Virtual Machine
  • AMI = OVA/OVF

13-10-15

VMworld Europe Day Two

Today is pretty much the day the whole conference springs to life. All the remaining delegates join the party with the TAM and Partner delegates, the Solutions Exchange opens for business and there's just a much bigger bustle about the place than there was yesterday.

The opening general session was hosted by Carl Eschenbach, and credit to him for getting straight in there and talking about the Dell deal. I think most are scratching their heads, wondering what this means in the broader scheme of things, but Carl reassured the delegates that it would still be ‘business as usual’ with VMware acting as an independent entity. That’s not strictly true, as they’re still part of the EMC Federation, who are being acquired by Dell, so not exactly the same.

Even Michael Dell was wheeled out to give a video address to the conference to try and soothe any nerves, giving one of those award ceremony ‘sorry I can’t be there’ speeches. Can’t say it changed my perspective much!

The event itself continues to grow. This year there are 10,000 delegates from 96 countries and a couple of thousand partners.

Into the guts of the content, first up were Telefonica and Novamedia. The former are a pretty well known European telco, and the latter are a multinational lottery company. The gist of the chat was that VMware solutions (vCloud, NSX etc) have allowed both companies to bring new services and solutions to market far quicker than previously. In Novamedia’s case, they built 4 new data centres and had them up and running in a year. I was most impressed by Jan from Novamedia’s comment ‘Be bold, be innovative, be aggressive’. A man after my own heart!

VMware’s reasonably new CTO Ray O’Farrell then came out and with Kit Colbert discussed the ideas behind cloud native applications and support for containers. I’ll be honest at this point and say that I don’t get the container hype, but that’s probably due in no small part to my lack of understanding of the fundamentals and the use cases. I will do more to learn more, but for now, it looks like a bunch of isolated processes on a Linux box to me. What an old cynic!

VMware have taken two approaches to supporting containers. The first is to extend vSphere using vSphere Integrated Containers (VIC) and the second is the Photon platform. The issue with containerised applications is that the vSphere administrator has no visibility into them – a container host just looks and acts like a VM. With VIC, there are additional plug-ins for the vSphere Web Client that allow the administrator to view which processes are in use, on which host, and how they are performing. All of this management layer is invisible and non-intrusive to the developer.

The concept of 'jeVM' was discussed, which is 'just enough VM' – a smaller footprint for container based environments. Where VIC is a Linux VM on vSphere, the Photon platform is essentially a microvisor on the physical host, serving up resource to containers running Photon OS, which is a custom VMware Linux build. The Photon platform itself contains two objects – a controller and the platform itself. The former will be open sourced in the next few weeks (aka free!), but the platform will be subscription-only from VMware. I'd like to understand how that breaks down a bit better.

vRealize Automation 7 was also announced, which I had no visibility of, so that was a nice surprise. There was a quick demo with Yangbing Li showing off the drag-and-drop canvas for advanced service blueprints. I was hoping this release would do away with the need for the Windows IaaS VM(s), but I'm reliably informed this is not the case.

Finally, we were treated to a cross cloud vMotion, which was announced as an industry first. VMs were migrated from a local vSphere instance to a vCloud Air DC in the UK and vice versa. This is made possible by 'stretching' the Layer 2 network between the host site and the vCloud Air DC. This link also includes full encryption and bandwidth optimisation. The benefit here is that again, it's all managed from a familiar place (the vSphere Web Client) and the cross cloud vMotion is just the migration wizard with a couple of extra choices for source and destination.

I left the general session with the overriding feeling that VMware really are light years ahead in the virtualisation market, not just in on-premises solutions but hybrid too. They've embraced all cloud providers, and the solutions are better for it. Light years ahead of Microsoft in my opinion; VMware have really raised their game in the last couple of years.

My first breakout session of the day was Distributed Switch Best Practices. This was a pretty good session as I've really become an NSX fanboy in the last few months, and VDSes are the bedrock of moving packets between VMs. As such, I noted the following:-

  • A DV port group still has a one to one mapping to a VLAN
  • There may be multiple VTEPs on a single host; a DV port group is created for all VTEPs
  • A DV port group is now called a logical switch when backed by VXLAN
  • Avoid single points of failure
  • Use separate network devices (i.e. switches) wherever possible
  • Up to 32 uplinks possible
  • Recommend 2 x 10 Gbps links rather than lots of 1 Gbps
  • Don't dedicate physical uplinks to management when connectivity is limited; enable NIOC instead
  • A VXLAN compatible NIC is recommended, so hardware offload can be used
  • Configure PortFast and BPDU Guard on switch ports, as the VDS does not run STP
  • Always try to pin traffic to a single NIC to reduce the risk of out of order traffic
  • VTEP traffic uses only a single uplink in an active/passive configuration
  • Use source based hashing – a good spread of VM traffic and simple configuration
  • It's a myth that VM traffic visibility is lost with NSX
  • NetFlow, port mirroring and VXLAN ping can test connections between VTEPs
  • Traceflow was introduced with NSX 6.2
  • Packets are specially tagged for monitoring, reporting back to the NSX Controller
  • Traceflow is in the vSphere Web Client
  • Host level packet capture is available from the CLI
  • Capture at VDS port group, vmknic or uplink level; export as pcap for Wireshark analysis
  • Use the DFW
  • Use jumbo frames
  • Mark the DSCP value on the VXLAN encapsulation for Quality of Service

For my final session of the day, I attended The Practical Path to NSX and Network Virtualisation. At first I was a bit dubious about this session, as the first 20 minutes or so just went over old ground of what NSX is and what all the pieces are, but I'm glad I stayed with it, as I got a few pearls of wisdom from it.

  • Customer used NSX for PCI compliance, move VM across data center and keep security. No modification to network design and must work with existing security products
  • Defined security groups for VMs based on role or application
  • Used NSX API for custom monitoring dashboards
  • Use tagging to classify workloads into the right security groups
  • Used distributed objects, vRealize for automation and integration into Palo Alto and Splunk
  • Classic brownfield design
  • Used NSX to secure Windows 2003 by isolating VMs, applying firewall rules and redirecting Windows 2003 traffic to Trend Micro IDS/IPS
  • Extend the DC across sites at layer 3 using encapsulation, but shown as the same logical switch to the admin
  • Customer used NSX for metro cluster
  • Traceflow will show which firewall rule dropped a packet
  • VROps shows NSX health and also logical and physical paths for troubleshooting

It was really cool to see how NSX could be used to secure Windows 2003 workloads that could not be upgraded but still needed to be controlled on the network. I must be honest, I hadn’t considered this use case, and better still, it could be done with a few clicks in a few minutes with no downtime!

NSX rocks!

12-10-15

VMworld Europe Day One

Today saw the start of VMworld Europe in Barcelona, with today being primarily for partners and TAM customers (usually some of the bigger end users). However, that doesn’t mean that the place is quiet, far from it! There are plenty of delegates already milling around, I saw a lot of queues around the breakout sessions and also for the hands on labs.

As today was partner day, I had already booked my sessions on the day they were released. I know how quickly these sessions fill, and I didn't want the hassle of queuing up outside and hoping that I would get in. The first session was around what's new in Virtual SAN. There have been a lot of press inches given to the hyperconverged storage market in the last year, and I've really tried to blank them out. Now the FUD seems to have calmed down, it's good to be able to take a dispassionate look at all the different offerings out there, as they all have something to give.

My first session was with Simon Todd and was titled VMware Virtual SAN Architecture Deep Dive for Partners. 

It was interesting to note the strong numbers of customers deploying VSAN. There was a mention of 3,000 globally, which isn't bad for a product that you could argue has only just reached a major stage of maturity. There was the usual gratuitous customer logo slide, one of which was of interest to me: United Utilities, who deal with water related things in the north west, are a major VSAN customer.

There were other technical notes, such as VSAN being an object based file system, not a distributed one. One customer has 14PB of storage over 64 nodes, and the limitation to further scaling out that cluster is a vSphere related one, rather than a VSAN related one.

One interesting topic of discussion was whether or not to use passthrough mode for the physical disks. What this boils down to is the amount of intelligence VSAN can gather from the disks if they are in passthrough mode. Basically, there can be a lot of ‘dialog’ between the disks and VSAN if there isn’t a controller in the way. I have set it up on IBM kit in our lab at work, and I had to set it to RAID0 as I couldn’t work out how to set it to passthrough. Looks like I’ll have to go back to that one! To be honest, I wasn’t getting the performance I expected, and that looks like it’s down to me.

VSAN under the covers seems a lot more complex than I thought, so I really need to have a good read of the docs before I go ahead and rebuild our labs.

There was also an interesting thread on troubleshooting. There are two fault types in VSAN – degraded and absent. A degraded state is when (for example) an SSD is wearing out, and while it will still work for a period of time, performance will inevitably suffer and the part will ultimately go bang. An absent state is where a temporary event has occurred, with the expectation that this state will be recovered from quickly. Examples of this include a host in maintenance mode or a network connection going down, and this affects how the VSAN cluster behaves.

There is also now the ability to perform some proactive testing, to ensure that the environment is correctly configured and performance levels can be guaranteed. These steps include a ‘mock’ creation of virtual machines and a network multicast test. Other helpful troubleshooting items include the ability to blink the LED on a disk so you don’t swap out the wrong one!

The final note from this session was the availability of the VSAN assessment tool, which is a discovery tool run on a customer site, typically for a week, that gathers existing storage metrics and provides sizing recommendations and cost savings using VSAN. This can be requested via a partner, so in this case, Frontline!

The next session I went to was Power Play: What's New With Virtual SAN and How To Be Successful Selling It. Bit of a mouthful, I'll agree, and as I'm not much of a sales or pre-sales guy there wasn't a massive amount of takeaway for me from this session, but Rory Choudhari took us through the current and projected revenues for the hyperconverged market, and they're mind boggling.

This session delved into the value proposition of Virtual SAN, mainly in terms of costs (both capital and operational) and the fact that it's simple to set up and get going with. He suggested it could live in harmony with the storage teams and their monolithic frames; I'm not so sure myself. Not from a tech standpoint, but from a political one. It's going to be difficult in larger, more bureaucratic environments.

One interesting note was Oregon State University saving 60% using Virtual SAN as compared to refreshing their dedicated storage platform. There are now nearly 800 VSAN production customers in EMEA, and this number is growing weekly. Virtual SAN 6.1 also brings with it support for Microsoft and Oracle RAC clustering. There is support for OpenStack, Docker and Photon, and the product comes in two versions.

If you need an all flash VSAN and/or stretched clusters, you’ll need the Advanced version. For every other use case, Standard is just fine.

After all the VSAN content I decided to switch gears and attend an NSX session called Disaster Recovery with NSX, SRM and vRO with Gilles Chekroun. Primarily this session concentrated on the features in the new NSX 6.2 release, namely the universal objects now available (distributed router, switch, firewall) which span datacentres and vCenters. With cross-vCenter vMotion, VMware have really gone all out removing vCenter as the security or functionality boundary to using many of their products, and it's opened a whole new path of opportunity, in my opinion.

There are currently 700 NSX customers globally, with 65 paying $1m or more for their deployments. This is not just licencing costs, but also integration with third party products such as Palo Alto, for example. Release 6.2 has 20 new features and has the concept of primary and secondary sites. The primary site hosts an NSX Manager appliance and the controller cluster, and secondary sites host only an NSX Manager appliance (so no controller clusters). Each site is aware of things such as distributed firewall rules, so when a VM is moved from one site to another, the security settings are preserved.

Locale IDs have also been added to provide the ability to 'name' a site and use the ID to direct routing traffic down specific paths, either locally on that site or via another site. The key takeaway from the session was that DR is typically slow, complex and expensive, with DR tests often only being invoked annually. By providing network flexibility between sites and binding in SRM and vRO for automation, some of these issues go away.

In between times I sat the VCP-CMA exam for the second time. I sat the beta release of the exam and failed it, which was a bit of a surprise as I thought I’d done quite well. Anyway, this time I went through it, some of the questions from the beta were repeated and I answered most in the same way and this time passed easily with a 410/500. This gives me the distinction of now holding a full house of current VCPs – cloud, desktop, network and datacenter virtualisation. Once VMware Education sort out the cluster f**k that is the Advanced track, I hope to do the same at that level.

Finally I went to a quick talk called 10 Reasons Why VMware Virtual SAN Is The Best Hyperconverged Solution. Rather than go chapter and verse on each point I’ll list them below for your viewing pleasure:-

  1. VSAN is built directly into the hypervisor, giving data locality and lower latency
  2. Choice – you can pick your vendor of choice (HP, Dell, etc.) and either pick a validated, pre-built solution or 'roll your own' from a list of compatible controllers and hard drives on the VMware HCL
  3. Scale up or scale out, don’t pay for storage you don’t need (typically large SAN installations purchase all forecasted storage up front) and grow as you go by adding disks, SAS expanders and hosts up to 64 hosts
  4. Seamless integration with the existing VMware stack – vROps adapters already exist for management, integration with View is fully supported etc
  5. Get excellent performance using industry standard parts. No need to source specialised hardware to build a solution
  6. Do more with less – achieve excellent performance and capacity without having to buy a lot of hardware, licencing, support etc
  7. If you know vSphere, you know VSAN. Same management console; no new tricks or skills to learn with the default settings
  8. 2000 customers using VSAN in their production environment, 65% of whom use it for business critical applications. VSAN is also now third generation
  9. Fast moving road map – version 5.5 to 6.1 in just 18 months, much faster rate of innovation than most monolithic storage providers
  10. Future proof – engineered to work with technologies such as Docker etc

All in all a pretty productive day – four sessions and a new VCP for the collection, so I can’t complain. Also great to see and chat with friends and ex-colleagues who are also over here, which is yet another great reason to come to VMworld. It’s 10,000 people, but there’s still a strong sense of community.

02-07-15

Networking for VMware Administrators – Book Review


Somewhat to my surprise, I find I bought "Networking for VMware Administrators" back in April 2014, and it has been on my "to do" list to read ever since. Regular readers will know of my recent scrapes and japes with NSX, including passing the VCP-NV exam, so there was a nice dovetail between what I've been learning in this area and this book.

For those familiar with the VMware circuit, Chris Wahl is a well known presenter and author who, amongst other things, regularly appears at VMworld and records Pluralsight videos, which I always like to use as a jump start to anything new I learn. As I'm not a networking guy, I thought I would try and start at the bottom, get a refresher on basic concepts and then move forward to how they apply in the vSphere world. Steve Pantol is a new name to me, but the two have a nice flow to how they write.

This book certainly hits the mark where that is concerned. Starting off very simply, it covers how networking evolved from the simplest idea to where it is now, taking you from the first rung on the ladder and conceptualising each new addition to networking designs, such as hubs, repeaters and switches. It then moves along to things such as VLANs and broadcast domains.

Physical networking is covered at a decent level of detail, taking into account the OSI model and the subtle but important differences between layers 2, 3 and above. I found the authors' easy and humorous style of delivery very easy to follow, without it feeling like a dry subject being rammed down your throat. Networking isn't necessarily the most intriguing subject you'll ever cover, but we'd be nothing without its essential plumbing to get us connected. I read the book in three sittings, which is pretty good for me, as I've got the attention span of a gnat.

Part II of the book concentrates on virtual networking and switching, moving the focus towards vSphere and its networking options. Obviously this falls into two camps – standard and distributed vSwitches. There is also some content on the Nexus 1000V switch, but I pretty much skipped that as I've never seen it and currently don't really care about it. That being said, it's good to know the section is there for me to refer back to if need be.

One aspect I really liked about the book overall was how choices and requirements fed into the design of the networking infrastructure, both from a physical and virtual viewpoint. Chris is a dual VCDX and it’s useful to get inside of his head and understand how to translate these sorts of issues and choices into an overall design. Especially useful if I ever get my finger out and actually submit a VCDX design!

Part III covers storage traffic on the network, namely iSCSI and NFS. I was a little surprised to see this type of content in the book, but enjoyed reading about it none the less. I suppose storage traffic falls into the cracks a little bit as it’s not “pure” VM networking, but it’s just as essential to get this part right when designing an overall solution. Bad storage == bad performance!

Again, a good emphasis on design constraints, assumptions and choices is put into this section, giving you a good steer on what should be considered when using storage protocols over the physical network (items such as dedicated, non-routed VLANs, for example). One good tip I picked up was how to configure NFS to use more NICs by creating multiple exports on the NFS server and establishing separate links. As with all other sections, single points of failure are discussed and mitigated with different design choices.

Another good titbit I picked up was using traffic shaping to throttle vMotion traffic on 10 Gbps Ethernet – I'd never before come across a good use case for traffic shaping; I'd assumed NIOC was always the way to go.

Finally, Part IV covers off all the other "miscellaneous" networking concerns for your design and/or environment. This includes vMotion, as discussed above, and how to design around multiple NICs and/or connections, exploding a few myths along the way.

At 368 pages, it’s not War and Peace but also it’s not a 100 page pamphlet that skims over the important details. Like I said, I read it in around three chunks over a couple of days without it feeling like a chore. I think for anyone pursuing the VCDX route, this book is an absolute must. Not only does it help crystallise some concepts around physical and virtual networking, but there is excellent detail on how to consider your networking design and how to justify particular design decisions.

NSX is out of the scope of this book, but is such a huge topic in and of itself that I’m sure we’ll see a release on this in the not too distant future. This is a book that helps you understand networking from the ground up and how this relates to a virtual world.

That being said, it’s a highly recommended addition to your library of resources as it helps you have a meaningful conversation with networking teams, which as we all know is not the easiest thing in the world 😉

Remember if you have a VCP certification, you can buy this book from VMware Press with a 30% discount using the code you can obtain from the VCP portal. I also believe Chris donates all book profits to charity, so yet another excellent reason to add this to your collection. Other good stockists are also available!

11-06-15

VCP6-CMA Study Guide – Section 3: Create and Administer Cloud Networking


Objective 3.1: Explain NSX Integration with vRealize Automation

Manage network services from within vRealize Automation

  • Network profiles are used to map networks in vRA to port groups in vSphere (for example)
  • Create a network profile from the vRealize Appliance, logged in as a fabric administrator
  • Go to Infrastructure -> Reservations -> Network profiles
  • Click New Network Profile and select the appropriate type (External, NAT, private, routed – all are created at time of provisioning except External which is a pre-existing vSphere port group)
  • Give the profile a name and configure the subnet mask (and optionally, DNS details and gateway)
  • Click IP Ranges tab and add a range of IP addresses for that profile to consume by using New Network Range button
  • Fill out a name and a start and end IP address for the range, click OK
  • A CSV file may also be used to define a large range of IP addresses

Configure NSX Integration

  • Prerequisites include an existing NSX Manager instance associated to a vCenter Server and a vSphere endpoint instance
  • Also credentials for the NSX Manager (Infrastructure -> Credentials -> New Credentials) and NSX plug-in into Orchestrator
  • Login to the vRealize Appliance as an IaaS administrator
  • Edit the vSphere endpoint in Infrastructure -> Endpoints
  • Select “Specify manager for network and security platform”
  • Add the IP address or DNS name of the NSX Manager appliance
  • Select the NSX Manager credential set previously added
  • Run a data collection from the Infrastructure -> Compute Resources section in vRealize Appliance (ensuring the network discovery is enabled)
  • Before you consume NSX services, you must run the Enable Security Policy Support for Overlapping Subnets Workflow in vRealize Orchestrator, using the NSX Manager endpoint previously used as the input parameter for the workflow.
  • After you run this workflow, the Distributed Firewall rules defined in the security policy are applied only on the vNICs of the security group members to which this security policy is applied

Configure IaaS for Network Integration

  • Configuration requires steps in this order:-
    • Configure the Orchestrator endpoint in IaaS
    • Create a vSphere instance integrated with NSX (see above)
    • Run the Enable Security Policy Support for Overlapping Subnets Workflow (see above)
    • Create a network profile (see above)
    • Add or amend an existing reservation, click on the Network tab
    • Select an external network in the Network Paths list
    • Select the transport zone, security group and routed gateway

Objective 3.2: Configure and Manage vRealize Automation Networking

Identify the available NSX for vSphere Edge network services

    • NSX Edge Services include:-
      • Dynamic Routing (Provides the necessary forwarding information between layer 2 broadcast domains, thereby allowing you to decrease layer 2 broadcast domains and improve network efficiency and scale. NSX extends this intelligence to where the workloads reside for doing East-West routing. This allows more direct virtual machine to virtual machine communication without the costly or timely need to extend hops. At the same time, NSX also provides North-South connectivity, thereby enabling tenants to access public networks.)
      • Firewall (Supported rules include IP 5-tuple configuration with IP and port ranges for stateful inspection for all protocols)
      • Network Address Translation (Separate controls for Source and Destination IP addresses, as well as port translation)
      • DHCP (Configuration of IP pools, gateways, DNS servers, and search domains)
      • Site-to-Site Virtual Private Network (VPN) (Uses standardized IPsec protocol settings to interoperate with all major VPN vendors)
      • L2 VPN (Provides the ability to stretch your L2 network)
      • SSL VPN-Plus (SSL VPN-Plus enables remote users to connect securely to private networks behind a NSX Edge gateway)
      • Load Balancing (Simple and dynamically configurable virtual IP addresses and server groups)
      • High Availability (High availability ensures an active NSX Edge on the network in case the primary NSX Edge virtual machine is unavailable)
      • Multi-Interface Edge

Configure DHCP/NAT/VPN/Load Balancer

  • Configuration of NSX is done from the vSphere Web Client
  • Uses a plugin under the Networking & Security button
  • Go to NSX Edges and create an Edge Gateway for the services
  • Provide CLI username and password for appliance
  • Enable SSH and HA if required
  • Pick the datacenter and appliance size (Compact, Large, X-Large, Quad-Large)
  • Choose cluster and datastore for Edge appliance deployment
  • Configure NIC and which VDS you want to attach the appliance to
  • Configure IP addresses and subnet, MTU size (1600 for VXLAN, remember)
  • Services are configured by double clicking on the Edge appliance and going to the Manage tab

Sub-allocate IP Pools

  • IP Pools are created and edited under the NSX Edge Gateway object in the vSphere Web Client. Look under the Manage tab, then click Pools and the add button. Configure the pool as appropriate

Add static IP addresses

  • Static IP addresses are created under the Edge Gateway Manage tab, then DHCP and Bindings. Click the add button and add a VM or MAC binding as needed.
  • Interface, VM Name, VM vNIC interface, Host name and IP address are required fields.

Configure syslog

  • The syslog server is configured by logging into the NSX Manager appliance management interface, Manage Appliance Settings button and fill out the Syslog server under General settings.
  • IP address, port number and protocol (TCP/UDP) are required

06-05-15

The Open Road

open-road

I know I haven’t blogged for a while, but you’ll probably see now why. I recently left ANS to join a consulting and services company called Frontline Consultancy, who are another VMware partner in the North West. I realise I wasn’t at ANS too long, but to be honest, this new role was an opportunity not to be missed.

I wasn’t on the lookout for a new position, but it was nice to be spotted and once I found out what the role was about, I couldn’t say no. This blog has been EUC centric for quite a long time, and while there will still be some EUC content, I will be moving into a more general VMware space in terms of content. I’m headed back into the data centre and adding vCloud technologies to my bow (or vRealise, or whatever it’s called today!).

Obviously I’ve been doing DCV activities for some years, but cloud was the major missing piece of my personal skills jigsaw. Now I have the chance to close this gap and get involved with some automation projects that take me out of my comfort zone and force me to adapt once again. Ultimately, I do believe variety is the spice of life and as the picture above would suggest, the road is open for me and the chances appear to be limitless.

As I left ANS, they were awarded the Converged Infrastructure gong at the NetApp Partner awards, so they continue to go from strength to strength and I wish them well. As for my new role, it's a good chance for me to get stuck into some really high profile projects and become a better and more rounded techie.

One more thing, I have accepted the invitation to speak at next month’s North West England UK VMUG where I will be discussing the new VMware certification roadmaps and the recent changes made. Please do come along and give it a whirl, I believe we also have a vRockstar there in the shape of Duncan Epping. An event not to be missed! More details and registration are available at the event page. We’re back at Rosylee in Manchester, with the ubiquitous (and free) vBeers available afterwards.

Hope to see you there!

26-03-15

Upgrading The Home Lab Part III : Upgrading VMware Tools and Virtual Hardware

We’ve arrived at the final part of our odyssey (a small odyssey in my case, but an odyssey none the less!) in our upgrade to vSphere 6.0. We’ve upgraded vCenter (relatively trouble free), ESXi (not so much, but that was down to my Jurassic era hardware) and now we have the small matter of the VMs left, to upgrade VMware Tools and virtual hardware to the latest versions.

This might seem like the easiest task of the lot, but in my experience it's actually the hardest part. Not so much on a technical level, but from the perspective of there being large numbers of VMs to touch, and of course in times of Change Management, getting agreement to down VMs to upgrade their virtual hardware can sometimes feel like two rutting stags locking horns in a field. Although from vSphere 5.1 onwards a Windows reboot for an upgrade of VMware Tools was eliminated, we still need to power off VMs in order to upgrade their virtual hardware.

“Barry, let me upgrade the virtual hardware on your Exchange Server!..” “…No! Bugger off, Maurice! I can’t have 5 minutes downtime!..”

Thankfully, VUM can come to our rescue again. When it's installed, it creates some default patch baselines, two of which are for upgrading virtual hardware and VMware Tools. You can see these by clicking on the VMware Update Manager button in the Home view in the vSphere Client. You need to click on the "Baselines and Groups" tab and then on the "VMs/VAs" button. You should see the following in your VUM screen:-

vum-vm-baselines

There is also an upgrade path for virtual appliances, which you can see at the bottom; we're not going to cover that here, as appliances are usually in the minority. VMs are what we're looking at. In order to get VUM to bring our VMs up to date, we need to create a couple of Baseline Groups, or we can just use a single group if we want to consolidate both upgrades into a single operation, which is what I'll be doing. We can do this from the same screen as above, in the right hand pane. Click on the "Create" button to start the Baseline Group as shown below:-

create-baseline-group

This starts the Baseline Group creation wizard, which only really has a couple of steps to set up, nothing too tricky. Give the Baseline Group a name, as below. And no, don’t use one of the Bee Gees like I did with the stag picture:-

baseline-1

Click Next, and as we're only upgrading VMware Tools and virtual hardware, we're going to leave virtual appliances alone. We are applying the VM upgrades, so select the following predefined baselines:-

  • VM Hardware Upgrade to match host (predefined)
  • VMware Tools Upgrade to match host (predefined)

This is shown below:-

baseline-types

Click Next..review the settings and click Finish and you’ll see the following screen:-

baseline-group-complete

So now we have our baseline group created, we need some VMs to attach it to. As I've said countless times before, this is a test environment, so I don't suffer from the same constraints as a production system. That's another way of saying "if something explodes, I don't care", but that being said, I do want to stage these updates to make sure everything works as I expect before I push the baseline group to a wider audience.

I am not going to update any virtual appliances, as I mentioned previously, and I have no VMs right now that are Linux based. Rather than pushing out the baseline to all Windows VMs, I'm going to stage them by folder. First up is my seldom used Windows Cluster folder. This has two Windows Server 2012 R2 nodes and an iSCSI target also running Windows Server 2012 R2. As I hardly use this cluster, it spends most of its life powered off, meaning it's a good place to test my rolling VM updates.

So to start with, if you haven’t already, create a folder and move the VMs you want to update into this folder (hint: you’ll need to be in the “VMs and Templates” view in vSphere Client to do this). Once you’ve done this, you can add the baseline group to the folder by clicking the Update Manager tab and clicking the Attach.. button. You’ll then see the dialog below:-

attach-baseline-to-folder

As you can see, I've already ticked the box to add the Baseline Group to the folder. The sharper eyed readers amongst you will notice I could have done this without creating a baseline group first, but I think my way is neater 😉

Click Attach and then you will need to perform a Scan.. just as we did with the hosts. In fact, it’s exactly the same process. Remember at this stage, we don’t care about virtual appliance updates, so make sure you untick that box and tick the other boxes for VMware Tools and virtual hardware, as below:-

confirm-tools-scan

The scan results are in, and lo and behold I’m not compliant:-

vm-scan

In which case, I need to hit the Remediate.. button to apply both sets of upgrades, just like we did with the hosts. This starts an upgrade wizard, as shown below:-

remediate-vm-1

On clicking Next.. the next step is to schedule when we want the upgrades to occur. Like I said, these boxes are my guinea pigs as they are hardly ever powered on, so I can go ahead and do it immediately. In the production world, you’d probably have to do this out of hours or whenever your maintenance windows are:-

remediate-vm-2

Give the task a name and description as shown above, and decide when you want the process to run. The schedule can be set separately for powered on, powered off and suspended virtual machines. By default, "Immediately" is set for all three. Take care here!

One really useful feature of using VUM to upgrade VMs is the ability to create snapshots ahead of the actual upgrade processes. This is very handy on the off chance that something goes badly pear shaped. There’s no reason it should, but it’s always nice to have a safety net, isn’t it? And you are creating full offline backups, aren’t you?

remediate-vm-3

So, as you can see above, I'm keeping the snapshot for 24 hours (the default is 18, for some reason). You can keep them forever if you like, but if there are a lot of VMs to be upgraded, this could swallow a lot of expensive storage in a busy environment very quickly. I just want to make sure the VM boots and reports back in as up to date once the process is complete; 24 hours is plenty of time for me to validate that the update hasn't eaten my VM. As these particular VMs are already powered off, there's no need for me to select "Take a snapshot of the memory for the virtual machine". This requires a running instance of VMware Tools and can add a lot of time to the process, so use it sparingly.

Time for one last sanity check and then hit Finish if you’re happy:-

remediate-vm-4

You can then monitor the upgrade task in the tasks pane at the bottom of the screen, as below (click to expand):-

remediate-vm-5

Once the upgrade task completes (and this could take a while, so go and make a coffee or something), you should see a fully compliant bunch of VMs. If you don’t, you can use the Tasks/Events window (Events mainly) to help troubleshoot what went wrong. The law of averages says that a couple of VMs out of dozens will need some minor hand holding. To get through them all without issues is pretty much unheard of, so don’t worry. As you can see below from the Events window, the upgrade process is ongoing:-

vum-progress

And then after a little while of VUM whirring away in the background, skidoosh! We have 100% compliance!

remediate-vms-100-percent

Don’t believe me? Here’s what one of the VMs says..

vm-status

We’re on version 11 virtual hardware (ESXi 6.0 compatible) and VMware Tools are current. All done by VUM in the background. Multiply that by a few dozen VMs and you’ve got a nice time saver there! I also wanted to show that the pre-upgrade snapshot is available for us, on the off chance something went septic:-

vm-snapshot

As you can see, VUM even puts in a useful description so we know what the snapshot is, when it was created and when it will be deleted (if applicable).
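
If you'd rather not trust the GUI alone, you can double-check the whole estate from a script. Here's a minimal pyVmomi sketch (the hostname and credentials are placeholders) that prints the virtual hardware version and Tools status for every VM:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab only - don't skip certificate checks in production
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.lab.local',
                  user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    # config.version is the virtual hardware version, e.g. 'vmx-11'
    # for ESXi 6.0; toolsVersionStatus2 reports whether Tools is current
    print(vm.name, vm.config.version, vm.guest.toolsVersionStatus2)
view.DestroyView()
Disconnect(si)
```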

Conclusion

Upgrading VMs can often be the trickiest part of the upgrade process, as there can be hundreds or thousands of objects to be updated. However, VUM can make this process pretty painless by automating the upgrades and scheduling them for a time that suits you. Don't be like Barry and Maurice at the top of the article – get a maintenance window agreed with the VM owner and let VUM do all of the heavy lifting for you.

25-03-15

Upgrading the home lab Part II : ESXi hosts

In Part I of the "Upgrading the home lab" series, we migrated/upgraded the vCenter appliance from version 5.5 to 6.0. That all seemed to go pretty well, so the next major step on the road to vSphere 6.0 is to upgrade the ESXi hosts in the environment to ESXi 6.0. Just before we get to that, we've actually missed a step. Once vCenter has been upgraded to version 6.0, you should take a few minutes to upgrade VMware Update Manager (VUM) to version 6.0 too. In my case I hadn't got around to building a 5.5 VUM server, so I just built one from scratch with the vSphere 6.0 installer DVD. I did a simple install and used the SQL Server 2012 Express version for the database, as I'm just managing a single host. For 5 or more hosts, you should go and get the "full fat" SQL Server.

What’s new with VUM?

Not a lot, as far as I can see. It still requires a Windows Server (minimum 2008, but 2012 R2 should be your aim these days), still requires a SQL database (see above) and still requires the vSphere Client (not the Web Client) to perform any kind of meaningful management. In that respect, it doesn't look much different than it did in the 5.5 days. You can read what's new in the VUM 6.0 documentation, but it seems to be more about database support than anything to get excited about (like baking it into the appliance, for example).

Upgrading the ESXi host(s)

To upgrade your hosts, there are a couple of different ways you can do it. You can boot from the DVD (or remotely attach an ISO image if you have an iLO/DRAC card etc.) and perform an in-place upgrade, you can use VUM to upgrade your hosts, or you can boot from DVD/ISO and perform a fresh installation. It depends what you want to achieve in the process; obviously you want a quick and supported way of getting your hosts up to date, and VUM is VMware's recommended method.

However, in most enterprise environments, ESXi hosts are commodity items – by this I mean all VM data (and even ISOs) are stored on shared datastores on SAN/NAS etc. In this case, you can achieve a “clean slate” installation by using the installation DVD to perform a fresh installation with the original addressing information. Consider the use of host profiles to “backup” the host configuration before you start (requires Enterprise Plus licencing).

You can also use scripted upgrades, using Auto Deploy or the esxcli command; see here for further information on supported methods. I’m lazy, so I’m using VUM, but for completeness there’s a quick sketch of the esxcli route below.
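The esxcli route uses the ESXi offline bundle rather than the ISO. The bundle and profile names below are illustrative (so treat this as a sketch, not gospel); list the profiles in your own bundle first, and put the host into maintenance mode before running the update:

# List the image profiles contained in the offline bundle (use a full datastore path)
esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-6.0.0-2494585-depot.zip

# Upgrade the host to the chosen profile, then reboot the host
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-6.0.0-2494585-depot.zip -p ESXi-6.0.0-2494585-standard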

Using VUM to upgrade your hosts

As noted above, VUM is the recommended method of upgrading hosts to the latest version of ESXi. In terms of supported prior versions, if you’re on 5.x or above, you’re pretty much in clover. Anything older than that basically means a fresh installation, which isn’t all bad, depending of course on how many hosts you have to get through. Remember to check the VMware HCL to ensure your host hardware is supported with ESXi 6.0 and, if you can, obtain the custom vendor ISO of ESXi for the best level of driver support and functionality. At the time of writing however, I was only able to find the HP version of the custom ISO (as you can see below), so I will have to use the GA ISO to upgrade my PowerEdge. Hurry up, Dell!

esxi-downloads

Once you have obtained the ESXi 6.0 ISO, ensure the VUM plugin in the vSphere Client is installed and enabled (and one step I haven’t specifically called out is to upgrade your vSphere Client to version 6.0 before you start this part; reports of its demise have been somewhat premature!). You can check this by going to the Plugins menu and selecting Manage Plugins..; you should see something similar to the following:-

plugin-manager

If you have a prior version of the VUM plugin installed, or no plugin at all, you will have to select the “Download and install” option. This runs a brief installer and requires neither a reboot nor a restart of the vSphere Client. If the installation has been successful, you’ll see the plugin enabled in the Plugin Manager and you’ll also have a button on the home screen and an extra tab on the host view.

The next step is to upload our ESXi ISO into the VUM repository and create a patch baseline. To do this, you need to go to the Home view in the vSphere Client and then click on the VUM button in the Solutions and Applications section, as shown below:-

vum-home

This button takes you into the VUM management view and from here we need the ESXi Images tab, as shown below:-

esxi-images

And then from there, click on “Import ESXi Image” as shown above. Browse to and select the ESXi 6.0 ISO you downloaded, click Next to start the import process and you should see the following progress dialog. This usually only takes a couple of minutes.

iso-upload

If the import has been successful, you’ll see the following dialog:-

import-successful

We now need to create an upgrade baseline from this ISO so we can attach it to the hosts to be upgraded. Leave the “create a baseline” option ticked and give it a meaningful name, as shown below:-

baseline-name

And click “Finish”. All being well, you should now have both the ISO imported and the baseline created, as shown below:-

baseline-iso

So now we have imported our ISO and we have created a baseline. Next we need to associate this baseline with an object to be upgraded. We basically have three choices here: we can apply the baseline at datacenter level, at cluster level or at individual host level. I’m going to go for the first option, just so I can call out some differences between the options. To apply the baseline to the datacenter object, select it in the vSphere Client, select the Update Manager tab and click the Attach.. button on the far right, as shown below:-

attach-baseline

As you can see, my datacenter has no baselines attached yet. In the “Attach baseline or Group” dialog, you should see the upgrade baseline we created earlier. Tick the box and click “Attach” as shown below:-

attach-baseline-group

Once you have attached the upgrade baseline to the datacenter object, the view in Update Manager should change. You will see the hosts added and a 0% compliance report. This is because we haven’t yet run a scan against the host to check what version of ESXi is already installed and whether the host is compatible with the ESXi 6.0 upgrade. Next, select your hosts and click the “Scan..” button in the top right.

vum-scan

In our case we just want to scan against upgrade baselines, so be sure to tick this box in the “Confirm Scan” dialog:-

confirm-scan

Click the “Scan” button and VUM will go off and query each host in turn for its compliance against the ESXi 6.0 upgrade baseline we created. This should only take a couple of minutes per host. Once the scan is complete, you should see new information in the VUM tab. In my case, my host has come back as “Incompatible”, which doesn’t surprise me in the least as this host hardware is prehistoric by any measure. However, I can still force the upgrade to run if I know the installer will complete successfully. This isn’t strictly supported by VMware, but all it really means is that only current generations of servers are tested by VMware and their partners for HCL purposes; recertifying every piece of server hardware for each new release of ESXi does not make sense. It does not mean that your server can’t run ESXi 6.0, but I suggest you test it on some development kit first before moving forward. In my lab, I don’t care! Before you force anything through, though, it’s worth knowing exactly what the box is; there’s a quick sketch below.
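A couple of shell commands will identify the hardware for an HCL lookup (assuming SSH or the ESXi Shell is enabled on the host):

# Show the server vendor and model, handy for an HCL lookup
esxcli hardware platform get

# Show the installed CPU packages
esxcli hardware cpu list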

vum-incompatible

As you can see in the above graphic, my host is older than Bruce Forsyth and as such comes back as non-compliant in VUM. No surprises there. In order to force this upgrade through, I can hit the “Remediate” button. I have seen HCL-certified kit in the field come back as incompatible, so sometimes you do need to know how to do this to get the upgrade done. This in turn starts a six-step wizard to push the upgrade down to the host via VUM. First up, we need to select which hosts and which baseline to use, as below:-

remediate-1

Then we thoroughly read and agree to the software EULA:-

remediate-2

The next step is “signing the death warrant”. If this goes toes up, that’s down to you! Check the box to ignore warnings and in my case, hope my offline backups are good!

remediate-3

Then we give the task a name and description (you can call it anything you like, really) and schedule when this upgrade should be done. I’m going to do it immediately, because I just can’t wait for ESXi 6.0 goodness!

remediate-4

In the final configuration step, I need to tell VUM what to do if there are running VMs on the host to be upgraded. Normally you wouldn’t change anything here as really you should already have your host in maintenance mode before you target it with the upgrade. As my vCenter appliance is on the host to be upgraded, I need to be slightly more creative and get vCenter to power VMs off.

remediate-5

Then one last sanity check before we hit the chicken switch…

remediate-6

And off we went. However, big problems lay ahead. VUM spat out my upgrade, saying the CPU in my host was not compatible. OK, fair enough. I did buy it from Fred Flintstone! What I did instead was to burn the ESXi 6.0 ISO to CD and boot from the physical DVD drive in the host. This way I can basically tell the installer I don’t care about compatibility and support issues; I’m going to bear the risk of it all turning to toast.

First off, I booted from the CD and as the files were copying (black screen with a thin yellow progress bar at the top), I got the error “Error loading /s.v00 Fatal error: 6 (Buffer too small)” and the whole thing just stopped. I didn’t get it – the MD5 matched the VMware download site! I downloaded the ISO again, but this time I performed a “direct” download rather than using the Download Manager. The MD5 matched again, I burned a new CD and this time it all worked just fine.
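If you hit something similar, it’s worth checking the hash yourself before blaming the media. A minimal sketch from a Linux box (Windows users can do the same with certutil); the filename is illustrative and the expected value comes from the MyVMware download page:

# Compare this output against the MD5 published on the download page
md5sum VMware-VMvisor-Installer-6.0.0-2494585.x86_64.iso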

Even though the installer complained bitterly about the host CPU, CPU virtualisation modes and an unsupported PCI device, it all seems to work just fine. The host booted and my vCenter appliance auto-started as usual. For completeness, time to go back into VUM and validate my upgrade. To do this, select the host, go to the Update Manager tab and select “Scan” again, as previously. This should take just a minute or so, and then we get what we were hoping for. Green!

vum-green
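If you’d like a second opinion from the host itself, the shell will confirm the build too (again assuming SSH or the ESXi Shell is enabled):

# Prints the ESXi version, build number and image profile
vmware -vl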

So now we have vCenter at 6.0 and ESXi at 6.0. Not without a few niggles, but that’s just a consequence of using such old hardware. The moral of the story for me is that it’s high time I gave my boxen an overhaul. Finally, as a last piece of housekeeping, I’m going to validate the status of my vSphere Client plug-ins:-

plugin-error

It seems everyone is happy except Mr Auto Deploy. I don’t use Auto Deploy in my lab, but red errors kind of piss me off anyway. I performed a quick Google and found a blog post by Kyle Gleed that tells you how to fix this: you simply start the Auto Deploy service on the appliance, as it’s disabled by default. God bless the internet. However, Kyle’s instructions reference a management interface to the appliance which is no longer used in 6.0. In order to configure appliance-based services, you must log in to the Web Client as an administrator and enable the service from there.

In the main Web Client home screen, click on the “Administration” button on the left and navigate down to Deployment/System Configuration as shown below:-

deploy-sysconfig

Then click “Services”..

services

And then right click on “Auto Deploy”.. and select “Edit Startup Type”..

rightclick-autodeploy


Select the Startup Type, depending on how you want the service to start on appliance boot. I’m choosing “Automatic”. Click OK..

startup-type

And then manually start the service by right clicking again on “Auto Deploy” and selecting “Start”..

autodeploy-start

I then go back into Plugin Manager, enable the Auto Deploy plugin (accepting any certificate warnings) and we should be free of errors!

plugin-manager-fixed

As a side effect, we also have a nice button added to the home screen:-

autodeploy-button
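For the command-line inclined, I believe the same result can be had from the appliance’s shell via service-control, though be warned the service name here is my assumption (Auto Deploy has historically run as the rule-based deployment “rbd” service), so check the list on your own appliance first:

# List the services the appliance knows about, to confirm the Auto Deploy service name
service-control --list

# Start the service and confirm it took (vmware-rbd-watchdog is assumed)
service-control --start vmware-rbd-watchdog
service-control --status vmware-rbd-watchdog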

Conclusion

So there we go, we now have an upgraded vCenter Server and ESXi host. I wouldn’t advise forcing the upgrade through the way I did unless you have nothing to lose, i.e. it’s a test lab or something non-production. If the host hardware you are installing to is HCL listed, you should whizz through a VUM-focused upgrade just fine.

If you get any odd errors in the Web Client, try deleting your browser cache and also the Flash Player cache from Control Panel. There could be something caught up in there from 5.x days.

Next stop is upgrades to the virtual machines – virtual hardware and VMware Tools. That is to come in Part III!

 

23-03-15

Upgrading the home lab Part I : vCenter Server

vSphere 6.0 has finally shipped, so I decided to take the plunge and upgrade the home lab to vSphere 6.0. In the next couple of posts, I’ll outline the steps required to perform the upgrade, plus any issues I encountered along the way. I think most people know that most articles I write are focused around VDI, so let me state this straight out of the gate: vSphere 6.0 does not support Horizon View until release 6.1. So basically, don’t upgrade any environments to vSphere 6.0 where View is in use; stuff will probably break. When is View 6.1 out? I don’t have a clue right now, but seeing as the release notes have been posted up, I can’t imagine it’s too far away.

If you’re project planning and you need to have certified upgrade paths (I know some project documentation requires this in some companies), the current (at the time of writing) interoperability matrix result is shown below:-

Compatibility Matrix


So my home lab is a very simple affair indeed. It comprises a single host, a Dell PowerEdge SC1435 with 32GB RAM and two Opteron processors. Old hat I know, but it gets the job done. For those wondering how I deal with power management on such old kit, it’s simple. I turn the bugger off when I’m not using it! As I’m often on the road, I don’t see a lot of value in having the beast humming away in the attic when I’m not around to use it.

Anyway, that aside, it’s currently on ESXi 5.5 U2 and runs the vCenter Server Appliance. I chose the appliance because it’s quick and simple, and I don’t have to faff around with Windows licenses. I know Linux quite well, so I don’t have any fear knocking around on the command line. In vSphere 6.0, the back end architecture of vCenter changes somewhat. If you recall, in vCenter 4.x and 5.0, everything was pretty much baked into one installer. In vSphere 5.1 and 5.5, Single Sign-On was broken out (and made good in 5.5!), as was the Inventory Service, to provide a more modular environment should you wish to break things out a little for scalability and to mitigate the “all eggs in one basket” factor.

Further to that, vCenter 6.0 now has the concept of the Platform Services Controller. Put simply, the Platform Services Controller provides infrastructure or access services such as SSO, the License Service, the Lookup Service and the Certificate Authority. vCenter Server is basically everything else: the Inventory Service, PostgreSQL, the Web Client, the Dump Collector, et al. For my simple purposes, I’m just going to upgrade from my current vCenter 5.5 appliance to the 6.0 appliance; I don’t really need to start making the design overly complex. In fact, because it is just a lab environment, I’m not massively bothered if the upgrade process blows up, as I can just rebuild from scratch. Obviously in a live environment, I’d be slightly more circumspect!

One important caveat to note is in the VMware documentation:-

You cannot switch the models after deployment, which means that after you deploy vCenter Server with an embedded Platform Services Controller, you cannot switch to vCenter Server with an external Platform Services Controller, and the reverse.

For full information on the pros and cons of either method, please refer to the product documentation; I’m not going to go into that level of detail here. What is reassuring for me, with my one host and a dozen VMs, is the following comment in the documentation:-

vCenter Server with an embedded Platform Services Controller is suitable for most environments.

Cool, I feel kind of validated now. I couldn’t see at first blush how the sizing of your environment affects your design decision; I suspect it’s more to do with geographical constraints, multiple vCenters and other VMware products that integrate with it, such as vRealize Automation. More on that in the future once I understand it better!

The Appliance Upgrade Process

The process of upgrading your vCenter appliance from 5.x to 6.0 is actually more of a migration than an upgrade. In essence, what you’re doing is spinning up a new 6.0 appliance with temporary IP address information, using SSH to copy over the database (and historical data, if you so choose) from the source 5.x appliance, changing the target appliance’s IP address to the source’s address and then dropping the source 5.x appliance.

Meeting Prerequisites

As you might expect, there are several prerequisites to be met before actually copying over any files or making any changes. First and foremost, have a backup, and no, a snapshot is not a backup! By all means take a snapshot of your vCenter Server prior to starting the process, but have a block based backup too, whether that’s via Veeam or any other backup method. Don’t rely on snapshots. If you do, your upgrade deserves to fail!
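By all means script the snapshot part. A minimal sketch from the ESXi host’s shell, assuming SSH or the ESXi Shell is enabled (the VM ID is illustrative), and to repeat: this is belt and braces on top of a proper backup, not a substitute for one:

# Find the appliance’s VM ID on this host
vim-cmd vmsvc/getallvms

# Snapshot it: name, description, don’t include memory, don’t quiesce
vim-cmd vmsvc/snapshot.create 42 pre-vcsa6-upgrade "Taken before the 6.0 migration" 0 0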

Again, the product documentation is the best place to refer to, as I’m sure the prerequisites will change over time as experience comes back from the field once the product is being deployed. Once the prerequisites have been met, we’re hot to trot and ready to install our vCenter Server 6.0 appliance.

Download the appliance ISO file from MyVMware (at the time of writing this is VMware-VCSA-all-6.0.0-2562643.iso) and you may notice from the off that we’re not downloading an OVA/OVF as we did previously. This is because the upgrade method is slightly different. Instead, we’re going to take the ISO and mount it locally on our Windows machine (if you’re on Windows 8.1, you can right-click and select Mount to mount the ISO to your DVD drive) as shown below:-

mount-vcsa


Alternatively, you can right-click and extract with 7-Zip (for example) and create a source directory of files. However, for my purposes, I’m going to keep it simple and mount the ISO in Windows. Once mounted, we need to navigate to the DVD drive and go to the \vcsa folder. In here is the installer for the Client Integration Plugin, which we will need for this process. As a good habit, don’t forget to right-click the installer and select “Run as Administrator”, as shown below:-

client-runasadmin

You’ll need to close any running browsers while the plugin is installed, and then step through the simple installer, which should take just a minute or so. Once this install is complete, no reboot should be required and we can jump back into the root of the installation DVD and run the main installer, vcsa-setup.html. I ran this with IE; I don’t know how well it works with other browsers. You will need to allow the browser plugin to run in order to start the installer, so click “Allow” (if you’re using IE):-

allow_installer

You should then be greeted with the option to install or upgrade. If you don’t see this screen, go back and check your client integration plugin installation (a reboot may help):-

install

Time for another quick sanity check at this point that the pre-requisites have truly been met. In my case I’m running a 5.5 version of the virtual appliance, as shown below:-

old-vc


so I’m good to go:-

sanity-check

Now to the meat and potatoes of the installer itself: a nine-part wizard has to be negotiated in order to kick the upgrade process off. First up is the usual EULA. I read this thoroughly (twice!), consider myself happy with it, click “I accept..” and click Next:-

part1-eula

Next I need to input details of the ESXi host to which I’d like to push out the new virtual appliance. Note the caveats listed at the bottom of this step – disable lockdown or maintenance mode and if you’re using a VDS, initial deployment of the appliance must be to an ephemeral port group. Click Next:-

part2-esxi

Accept the certificate warning by clicking Yes in order to continue:-

part2-cert

Your host credentials will then be validated as shown below:-

part2-creds

Step 3 is setting up the appliance name and SSH options. I’m calling mine the highly imaginative and original name of “vCenter” and I’m turning SSH on because it’s a lab and I’m not massively fussed about security. In the real world, leave this disabled unless you specifically need it. Click Next to proceed.

part3-vm

Step 4 is configuring the source information for the migration, i.e. your existing vCenter Server. In this screen you need to enter the IP/DNS details of the current appliance, the version number, your administrator@vsphere.local SSO password (and if you’re a buffoon like me and forgot what the password was, you can reset it using this KB! There’s a quick sketch of that process after the screenshot below), the root password for the appliance, and host information along similar lines. You can optionally choose to migrate performance and other historical data. I’m going to enable this option, and I can’t think of any great reason in the real world why you wouldn’t (unless your database is mah-oosive). Before you proceed, check the caveats at the bottom of the page: that lockdown mode and maintenance mode are disabled, and that DRS is disabled so the source appliance doesn’t move off that source host part way through. Click Next:-

part4-source
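As an aside, the SSO password reset from that KB is done on the source appliance’s console and goes roughly like this (the tool path is as per the KB for the 5.5 appliance; treat this as a sketch and follow the KB for your exact version):

# Launch the directory services admin tool on the 5.5 appliance
/usr/lib/vmware-vmdir/bin/vdcadmintool

# Then choose option 3 (“Reset account password”) and supply the account UPN,
# e.g. Administrator@vsphere.local; a new random password is generated for you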

Once the credentials have been accepted, take care to check the information presented to you on what happens post-upgrade. The new appliance will inherit the old appliance’s IP address. Also make sure no firewalls prevent SSH access between the two appliances, as this is the transfer method used during the migration/upgrade. The installer will start the SSH service on the source appliance if it is not already running. Click Yes:-

part4-warning

In step 5, we have the choice of what size of appliance we’d like. This is pretty similar to the Windows-based installation and helps ensure JVM heap sizes are set appropriately, amongst other things. I feel quite inadequate choosing the “Tiny” option, but it is what it is and I have no requirement to go any bigger than that. Size clearly is everything in this case. Make your selection as appropriate and click Next:-

step5-size

Step 6 is choosing which datastore you’d like the appliance to go on. I’m going to choose the one with the most available space and I’m also going to thin provision it. This appliance won’t be doing a great deal in my lab and the datastore isn’t likely to fill up quickly, so I have no real need to thick provision here. Click Next:-

step6-datastore

Step 7 is the creation of some temporary network details for the new appliance, as this is really a migration rather than an in-place upgrade. In this step, we should pick the same port group as the source appliance and use the same subnet as well if possible, especially if the data migration is going to be large. My database is small and it’s a single host, so speed shouldn’t be an issue. Fill out the details appropriate to your environment and click Next:-

step7-network

Step 8 is the “Ready to complete” or final sanity check. Review the information presented and check for typos, wrong IP addresses, what size appliance you need (sometimes “Tiny” just ain’t enough!) and when you’re happy, click Finish:-

step8-ready

All being well, the process should start by initiating the appliance deployment to your ESXi host as below:-

Deploy

You can also monitor what is going on by connecting to the ESXi host with your vSphere Client and clicking on the “Events” tab of your target appliance; you should see something similar to below:-

transfer

And some while later, I was greeted with the following screen. The process took around an hour, but bear in mind this is a “tiny” and simple deployment. Larger enterprises should budget a lot more time for this process when migrating between appliances:-

finished

I double-checked that my “old” appliance had been powered off, and the new appliance is up with the original appliance’s identity:-

appliance-screen

Once the install is completed, you may like to perform some housekeeping before you put the appliance into production. By default, my appliance came up with the DNS name localhost. I don’t really want that, so I quickly logged into the appliance console to change it. Something you may notice that’s new is that the vCenter appliance console now behaves just like an ESXi host’s, so once you press F2 to customise the system and enter the root password, the menu structure should be pretty familiar:-

appliance-menu

I like that VMware use a different colour scheme on the appliance, to save any confusion with an ESXi host. Even though you can see it’s the appliance at the bottom of the screen, the different colours may help prevent costly mistakes when you have many console sessions open! Back to the housekeeping: go into Configure Management Network and then DNS Configuration. Input the appropriate values for your appliance, as shown below:-

new-dns

I also like to disable IPv6, though there is a mixed bag of opinion on this. I say if you don’t use it, don’t enable it. However, this is a subjective thing and purely optional. To disable it, go into Configure Management Network and then into IPv6 Configuration, hit the space bar to untick the box as shown below and restart your appliance.

disable-ipv6

Once rebooted, you can see we’re up to vCenter 6.0!

about-vsphere6
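If you have SSH enabled on the appliance, I believe you can double-check the version from the shell too. A hedged sketch, as the binary location is from memory, so verify on your own appliance:

# Prints the vCenter Server version and build (the full path may be needed,
# e.g. /usr/lib/vmware-vpx/vpxd on the appliance)
vpxd -v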

Post upgrade issues

I have only really come across two issues so far. Firstly, I got an “Error #1009”, which I cleared by deleting cookies etc. from my web browser (and also upgrading the Flash Player in Firefox to the latest version).

As you can also see from the above screenshot, I was having issues with the Client Integration Plug-In. It was definitely installed when we started the migration process, and all three browsers I had (IE11, Firefox, Chrome) reported the issue, so I uninstalled the plug-in from Add/Remove Programs, rebooted, downloaded the plug-in again from the Web Client login page and installed it. As you can see below, all was good:-

integation-enabled

Conclusion

In conclusion, I’d say well done to VMware for streamlining the upgrade process for the vCenter appliance. Yes, it has a couple of quirks and yes, you should ensure all pre-reqs are met, but by and large I was pretty impressed with the whole process. Next up, my ESXi host…!