12-10-15

VMworld Europe Day One

Today saw the start of VMworld Europe in Barcelona, with the first day given over primarily to partners and TAM customers (usually some of the bigger end users). That doesn't mean the place is quiet though, far from it! There are plenty of delegates already milling around, and I saw long queues for the breakout sessions and the hands-on labs.

As today was partner day, I had booked my sessions on the day they were released. I know how quickly these sessions fill up, and I didn't want the hassle of queuing up outside and hoping I would get in. The first session was around what's new in Virtual SAN. A lot of column inches have been given to the hyper-converged storage market in the last year, and I've largely tried to blank them out. Now that the FUD seems to have calmed down, it's good to be able to take a dispassionate look at the different offerings out there, as they all have something to give.

My first session was with Simon Todd and was titled VMware Virtual SAN Architecture Deep Dive for Partners. 

It was interesting to note the strong number of customers deploying VSAN. There was mention of 3,000 globally, which isn't bad for a product that you could argue has only just reached a major stage of maturity. There was the usual gratuitous customer logo slide, one of which was of interest to me: United Utilities, who deal with water-related things in the north west, are a major VSAN customer.

There were other technical notes, such as VSAN being an object-based store rather than a distributed file system. One customer has 14PB of storage across 64 nodes, and the limit to scaling that cluster out further is a vSphere one rather than a VSAN one.

One interesting topic of discussion was whether or not to use passthrough mode for the physical disks. What this boils down to is the amount of intelligence VSAN can gather from the disks when they are presented directly: there can be a lot more 'dialogue' between the disks and VSAN when there isn't a RAID controller in the way. I have set VSAN up on IBM kit in our lab at work, and I had to configure the disks as RAID0 volumes because I couldn't work out how to set the controller to passthrough. Looks like I'll have to go back to that one! To be honest, I wasn't getting the performance I expected, and that now looks like it's down to me.
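
As I need to revisit that lab build anyway, something like the following is the sort of first check I have in mind. It's a minimal sketch, assuming the community pyVmomi library and a lab vCenter at the hypothetical address below (the credentials are placeholders too); it simply lists the devices each host sees, which makes it fairly obvious whether the controller is passing the drives straight through or presenting them as RAID0 logical volumes.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Lab only - don't skip certificate checks in production.
    context = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=context)
    try:
        content = si.RetrieveContent()
        hosts = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in hosts.view:
            print(host.name)
            for lun in host.config.storageDevice.scsiLun:
                # A RAID0 logical volume usually reports the controller's volume
                # model rather than the underlying drive model.
                print("  ", lun.canonicalName, lun.vendor, lun.model)
    finally:
        Disconnect(si)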

VSAN under the covers seems a lot more complex than I thought, so I really need to have a good read of the docs before I go ahead and rebuild our labs.

There was also an interesting thread on troubleshooting. There are two fault states in VSAN – degraded and absent. Degraded is when (for example) an SSD is wearing out; it will still work for a period of time, but performance will inevitably suffer and the part will ultimately go bang. Absent is where a temporary event has occurred, with the expectation that it will be recovered from quickly, for example a host in maintenance mode or a network connection going down. Which state a failure falls into affects how the VSAN cluster behaves, as the sketch below illustrates.
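
Here's a toy sketch (my own illustration, not VMware code) of how the two states drive rebuild behaviour. The 60-minute figure reflects VSAN's default repair delay for absent components as I understand it, so treat that number as an assumption rather than gospel.

    def rebuild_delay_minutes(fault_state: str) -> int:
        # Toy model: degraded components are rebuilt straight away, absent ones
        # are given time to come back (default repair delay assumed to be 60 min).
        if fault_state == "degraded":
            return 0    # failing hardware: start rebuilding replicas immediately
        if fault_state == "absent":
            return 60   # host reboot / maintenance: wait in case the components return
        raise ValueError(f"unknown fault state: {fault_state}")

    print(rebuild_delay_minutes("degraded"))  # 0
    print(rebuild_delay_minutes("absent"))    # 60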

There is also now the ability to perform some proactive testing, to ensure that the environment is correctly configured and performance levels can be guaranteed. These steps include a ‘mock’ creation of virtual machines and a network multicast test. Other helpful troubleshooting items include the ability to blink the LED on a disk so you don’t swap out the wrong one!

The final note from this session was the availability of the VSAN assessment tool, a discovery tool run on a customer site, typically for a week, that gathers existing storage metrics and provides sizing recommendations and projected cost savings from a move to VSAN. This can be requested via a partner, so in this case, Frontline!

The next session I went to was Power Play: What's New With Virtual SAN and How To Be Successful Selling It. Bit of a mouthful, I'll agree, and as I'm not much of a sales or pre-sales guy there wasn't a massive amount of takeaway for me, but Rory Choudhari took us through the current and projected revenues for the hyper-converged market, and they're mind-boggling.

This session delved into the value proposition of Virtual SAN, mainly in terms of costs (both capital and operational) and the fact that it's simple to set up and get going with. He suggested it could live in harmony with the storage teams and their monolithic frames; I'm not so sure myself, not from a technical standpoint but from a political one. It's going to be a difficult sell in larger, more bureaucratic environments.

One interesting note was Oregon State University saving 60% by using Virtual SAN compared to refreshing their dedicated storage platform. There are now nearly 800 VSAN production customers in EMEA, and the number is growing weekly. Virtual SAN 6.1 also brings support for Microsoft and Oracle RAC clustering, along with support for OpenStack, Docker and Photon, and the product comes in two editions.

If you need an all-flash VSAN and/or stretched clusters, you'll need the Advanced edition. For every other use case, Standard is just fine.
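
The edition rule as presented really is that simple; here's a toy helper (purely illustrative, my own summary rather than an official VMware licensing tool) that captures it.

    def vsan_edition(all_flash: bool, stretched_cluster: bool) -> str:
        # All-flash and/or stretched clusters need Advanced; everything else is Standard.
        return "Advanced" if (all_flash or stretched_cluster) else "Standard"

    print(vsan_edition(all_flash=True, stretched_cluster=False))   # Advanced
    print(vsan_edition(all_flash=False, stretched_cluster=False))  # Standard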

After all the VSAN content I decided to switch gears and attend an NSX session called Disaster Recovery with NSX, SRM and vRO, presented by Gilles Chekroun. Primarily this session concentrated on the features in the new NSX 6.2 release, namely the universal objects now available (distributed router, switch and firewall) which span datacentres and vCenters. With cross-vCenter vMotion, VMware have really gone all out in removing vCenter as the security or functionality boundary for many of their products, and in my opinion it's opened up a whole new path of opportunity.

There are currently 700 NSX customers globally, with 65 paying $1m or more for their deployments. That isn't just licensing costs, but also integration with third-party products such as Palo Alto, for example. Release 6.2 has 20 new features and introduces the concept of primary and secondary sites. The primary site hosts an NSX Manager appliance and the controller cluster, while secondary sites host only an NSX Manager appliance (so no controller clusters). Each site is aware of things such as distributed firewall rules, so when a VM is moved from one site to another its security settings are preserved.

Locale IDs have also been added, providing the ability to 'name' a site and use the ID to direct routing traffic down specific paths, either locally on that site or via another site. The key takeaway from the session was that DR is typically slow, complex and expensive, with DR tests often only invoked annually. By providing network flexibility between sites and binding in SRM and vRO for automation, some of these issues go away.

In between times I sat the VCP-CMA exam for the second time. I had sat the beta release of the exam and failed it, which was a bit of a surprise as I thought I'd done quite well. This time around, some of the questions from the beta were repeated, I answered most of them the same way, and I passed easily with 410/500. That gives me the distinction of now holding a full house of current VCPs – cloud, desktop, network and datacenter virtualisation. Once VMware Education sort out the cluster f**k that is the Advanced track, I hope to do the same at that level.

Finally I went to a quick talk called 10 Reasons Why VMware Virtual SAN Is The Best Hyperconverged Solution. Rather than go chapter and verse on each point I’ll list them below for your viewing pleasure:-

  1. VSAN is built directly into the hypervisor, giving data locality and lower latency
  2. Choice – you can pick your vendor of choice (HP, Dell, etc.) and either pick a validated, pre-built solution or ‘roll your own’ from a list of compatible controllers and hard drives on the VMware HCL
  3. Scale up or scale out, don’t pay for storage you don’t need (typically large SAN installations purchase all forecasted storage up front) and grow as you go by adding disks, SAS expanders and hosts, up to a maximum of 64 hosts
  4. Seamless integration with the existing VMware stack – vROps adapters already exist for management, integration with View is fully supported etc
  5. Get excellent performance using industry standard parts. No need to source specialised hardware to build a solution
  6. Do more with less – achieve excellent performance and capacity without having to buy a lot of hardware, licensing, support etc
  7. If you know vSphere, you know VSAN. Same management console, no new tricks or skills to learn with the default settings
  8. 2,000 customers using VSAN in their production environments, 65% of whom use it for business-critical applications. VSAN is also now in its third generation
  9. Fast moving road map – version 5.5 to 6.1 in just 18 months, much faster rate of innovation than most monolithic storage providers
  10. Future proof – engineered to work with technologies such as Docker etc

All in all a pretty productive day – four sessions and a new VCP for the collection, so I can’t complain. Also great to see and chat with friends and ex-colleagues who are also over here, which is yet another great reason to come to VMworld. It’s 10,000 people, but there’s still a strong sense of community.

07-07-15

Why Cloud Computing Is Like Gas And Electricity

[Image: storm clouds (cjohnson7 – Flickr)]

I happened to tweet a link to an article from The Register the other day regarding the price of cloud-based resources in Microsoft Azure going up, which detailed price rises of 11% in the Eurozone countries and, worse still, 26% in Australia. As a result, I ended up having a brief but interesting Twitter conversation with a follower about pricing and “locked in” charges like the ones energy companies offer here in the UK.

The overarching point being that cloud computing has an awful lot of variables that dictate the overall pricing. Perhaps we in the industry were all a little naive at first, thinking that Moore’s Law and the like would mean a regular doubling of compute power at half the price, with prices continuously falling. Hell, Amazon even tell you (or they did) that the more cloud resources people buy (storage, compute, networking), the cheaper it gets, because of economies of scale and their bulk purchasing power.

What we never really seemed to factor in back in those days was the volatility of global currencies. I think it’s reasonable to say that most IT pricing is inextricably linked to the value of the US Dollar, and when that goes up and down, pricing around the world for licensing and components tends to change too. As I write this post, Greece’s economy is in the toilet with no sure way to know what will happen next. It may even be that other Eurozone countries are close behind, and although I have strong opinions on the Euro, let’s park those for now and concentrate on the topic in hand.

Back to the original point: gas and electricity prices are governed by the free-market principles of supply and demand. As the market gets saturated, prices fall; as resources become scarcer, prices rise. Prices also move with the volatility of exchange rates. Stick with me here, I will get to the point.

When cloud computing is pitched, it’s pitched as being “OK” because it’s now operational expenditure (OpEx) rather than a large up-front capital expenditure (CapEx), the implication being smaller, more predictable, bite-sized chunks of spend over time. This news about Azure pricing going up 26% in the worst case means that any forward budgeting you did on the assumption of stable prices just got blown out of the water. 26% of anything is a lot of unbudgeted cost to find.
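
To put a number on it, here's a quick back-of-the-envelope sketch; the monthly spend below is a made-up figure purely for illustration.

    # Hypothetical monthly Azure bill and the worst-case 26% rise from the article.
    monthly_spend = 10_000
    rise = 0.26

    annual_budgeted = monthly_spend * 12
    annual_actual = monthly_spend * (1 + rise) * 12
    print(f"Budgeted:  {annual_budgeted:,.0f}")                  # 120,000
    print(f"Actual:    {annual_actual:,.0f}")                    # 151,200
    print(f"Shortfall: {annual_actual - annual_budgeted:,.0f}")  # 31,200 to find from nowhere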

Where does that leave you then? Well, it all depends on your business needs, but spreading the risk by using hybrid cloud solutions is one answer. Keep the “Crown Jewels” in your own DC if you can, farming off the less needy systems to cloud provider bit barns. What else can you do? If you’re going all in with a cloud provider, whether that’s VMware, Microsoft, Google, Amazon or anyone else, check what your escape route is. What does it cost to move your workloads? How long will it take to get out of the contract? How much time will you need to replicate workloads into the “new” cloud? Do you even have an escape clause?

I don’t profess to have the answers, and in many ways, I’m just thinking out loud. However, seeing this news has made me realise that there was a bigger picture about cloud computing I hadn’t seen before. Had you?