24-03-17

Avoiding vendor lock-in in the public cloud

A little while back, I had a pretty frank discussion with a customer about vendor lock-in in the public cloud, and he left me under no illusions that he saw cloud more as a threat than an opportunity. I did wonder if some incident in the past had left him feeling this way, but it didn’t feel appropriate to probe much further.

Instead of dwelling on the negatives of this situation, we decided to accentuate the positives and try to formulate some advice on how best this risk could be mitigated. This was especially important as the business had already made a significant investment in public cloud deployments. It is an important issue though – it’s easy enough to get in, but how do you get out? There are several strategies you could use; I’m just going to call out a couple of them as examples.

To start with, back in the days of all on-premises deployments, you would generally try to go for a “best of breed” approach. You have a business problem that needs a technical solution, so you look at all the potential solutions and choose the best fit based on a number of requirements. Typically these include cost, scalability, support, existing skill sets and the strength of the vendor in the market (Gartner Magic Quadrant, etc.). This applies equally in the public cloud – it’s still a product set in a technical solution, so the perspective needn’t change all that much.

One potential strategy is to use the best of breed approach to look at all public cloud vendors (for the purpose of this article, I really just mean the “big three” of AWS, Azure and Google Cloud Platform). As you might expect, the best cost, support and deployment options for, say, SQL Server on Windows would probably come from Microsoft. In that case, you deploy that part of the solution in Azure.

Conversely, you may have a need for a CDN and decide that Amazon CloudFront is the best fit, so you build that part of your solution around that product. This way you are mitigating risk by spreading services across two vendors while still retaining the best of breed approach.

However, “doing the splits” is not always preferable. It’s two sets of skills, two lots of billing to deal with and two vendors to punch if anything goes badly wrong.

Another, more pragmatic approach is to make open source technologies a key plank of your strategy. Products such as MySQL, Postgres, Linux, Docker, Java, .NET, Chef and Puppet are widely available on public cloud platforms, which means any effort put into these technologies can be moved elsewhere if need be (even back on premises). Not only that, but skills in the marketplace are pretty commoditised now, which makes bringing in new staff to help with the deployments (or even using outside parties) easier and more cost-effective.

You could go down the road of deploying a typical web application on AWS using Postgres, Linux, Chef, Docker and Java, and if that approach later becomes too expensive or other issues occur, it’s far easier to pick up the data you’ve generated in those environments, walk over to a competitor, drop it down and carry on.
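To make that a bit more concrete, here’s a minimal sketch of what moving the data layer might look like, assuming a plain Postgres database and the standard client tools (pg_dump and pg_restore). The hostnames, database name and user below are placeholders rather than anything from a real deployment, and credentials are assumed to come from a .pgpass file or the PGPASSWORD variable:-

    # Dump the database from the current provider to a local file
    $sourceHost = "appdb.current-provider.example.com"   # placeholder endpoint
    $targetHost = "appdb.new-provider.example.com"       # placeholder endpoint
    $database   = "appdb"

    & pg_dump --host $sourceHost --username appuser --format=custom --file appdb.dump $database

    # Restore the dump into an empty database at the new provider
    & pg_restore --host $targetHost --username appuser --dbname $database appdb.dump

Obviously a real migration involves far more than a single dump and restore, but because the database engine is the same on both sides, the heavy lifting is a data copy rather than a re-platforming exercise.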

Obviously this masks some of the complexities of how that move would actually take place, such as timelines, cost and the skills required, but it demonstrates to stakeholders that provider migration has been considered and accounted for in the technical solution.

The stark reality is that whatever you are doing with technology, there will always be an element of vendor lock-in. Obviously vendors have a financial motive to encourage it, but lock-in also comes from innovation: when a new technology is created, it adds new formats and data blobs to the landscape. The key to addressing this is taking a balanced view and being able to tell project stakeholders that you’re taking a best of breed approach based on requirements, and that you have built in safeguards in case issues occur in future that prompt a re-evaluation of the underlying provider.

 


07-03-17

What is the Cloud Shared Responsibility Model and why should I care?

When I have discussions with customers moving to a public cloud provider, one of the main topics of conversation (quite rightly) is the security of services and servers in the cloud. Long discussions and whiteboarding sessions take place, loads of boxes and arrows are drawn, and in the end the customer is confident about the long-term security of their organisation’s assets when moving to Azure or AWS.

Almost as an aside, one of the questions I ask is how patching of VMs will be performed, and a very common answer is “doesn’t Microsoft/AWS patch them for us?”. At this point I ask if they’ve heard of the Shared Responsibility Model, and often the answer is “no” – so much so that I thought a quick blog post was in order to reinforce the point.

So then, what is the Shared Responsibility Model? Put simply, when you move services into a public cloud provider, you are responsible for some or most of the security and operational aspects of the server (tuning, anti-virus, backup, etc.) and your provider is responsible for services lower down the stack that you don’t have access to, such as the hypervisor host, physical racks, power and cooling.

That being said, there is a bit more to it than that, depending on whether we’re talking about IaaS, PaaS or SaaS. The ownership of responsibility can be thought of as a “sliding scale” depending on the service model. To illustrate what I mean, take a look at the diagram below, helpfully stolen from Microsoft (thanks, boys!).

[Diagram: Microsoft’s shared responsibility matrix covering on prem, IaaS, PaaS and SaaS]

Reading the diagram from left to right, you can see that in the left most column where all services are hosted on prem, it is entirely the responsibility of the customer to provide all of the security characteristics. There is no cloud provider involved and you are responsible for racking, stacking, cooling, patching, cabling, IAM and networking.

As we move right to the IaaS column, you can see subtle shades of grey emerging (quite literally) as with IaaS, you’re hosting virtual machines in a public cloud provider such as Azure or AWS. The provider is responsible for DC and rack security and some of the host infrastructure (for example, cloud providers patch the host on your behalf), but your responsibility is to ensure that workloads are effectively spread across hosts in appropriate fault and update domains for continuity of service.

Note however that in the IaaS model, as you the customer are pretty much responsible for everything from the guest upwards, it’s down to you to configure IAM and endpoint security and to keep up to date with security patches. This is where a lot of the confusion creeps in. Your cloud provider is not on the hook if you fail to patch and properly secure your VMs (including network and external access). Every IaaS project requires a patching and security strategy to be baked in from day one, not retrofitted. This may mean extending on prem AD and WSUS for IAM and patching, to leverage existing processes. That’s fine and will work; you don’t necessarily need to reinvent the wheel here. Plus, if you re-use existing processes, it may shorten any formal onboarding of the project with Service Management.
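As an example of the “re-use what you already have” point, the sketch below shows one way a Windows VM in Azure could be pointed at an existing on prem WSUS server using the standard Windows Update policy registry keys. The WSUS URL is a placeholder, and in reality you would push this out via Group Policy or your configuration management tooling rather than by hand:-

    # Run inside the guest OS: point Windows Update at an existing WSUS server
    # so the VM follows the same patching process as the on prem estate.
    $wsusServer = "http://wsus.corp.example.com:8530"   # placeholder WSUS endpoint
    $wuKey = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
    $auKey = "$wuKey\AU"

    # Create the policy keys if they don't already exist
    New-Item -Path $wuKey -Force | Out-Null
    New-Item -Path $auKey -Force | Out-Null

    # Use the internal WSUS server for both updates and reporting
    Set-ItemProperty -Path $wuKey -Name WUServer -Value $wsusServer
    Set-ItemProperty -Path $wuKey -Name WUStatusServer -Value $wsusServer
    New-ItemProperty -Path $auKey -Name UseWUServer -Value 1 -PropertyType DWord -Force | Out-Null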

Carrying on across the matrix to the next column on the right is the PaaS model. In this model, you are consuming pre-built features from a cloud provider. This is most commonly database services such as SQL Server or MySQL, but it also includes pre-built web environments such as Elastic Beanstalk in AWS. Because you are paying for a sliver of a larger, multi-tenant service, your provider will handle more layers of the lower stack, including the virtual machines the database engine is running on as well as the database engine itself. Typically in this example, the customer does not have any access to the underlying virtual machine via SSH or RDP, as they would with IaaS.

However, as the matrix shows, there is still a level of responsibility on the customer (though the operational burden is reduced). In the case of Database PaaS, the customer is still in charge of backing up and securing (i.e. encryption and identity and access management) the data. This is not the responsibility of the provider with the exception of logical isolation from other tenants and the physical security of the hardware involved.
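To illustrate the sort of thing that stays in the customer’s court with database PaaS, here’s a minimal sketch using the AzureRM PowerShell module to switch on Transparent Data Encryption for an Azure SQL database. The resource group, server and database names are placeholders for your own deployment:-

    # Encryption of the data is still the customer's call to make in PaaS.
    Login-AzureRmAccount

    # Enable Transparent Data Encryption on the database
    Set-AzureRmSqlDatabaseTransparentDataEncryption `
        -ResourceGroupName "rg-data" `
        -ServerName "sqlserver01" `
        -DatabaseName "customerdb" `
        -State Enabled

    # Confirm the setting has taken effect
    Get-AzureRmSqlDatabaseTransparentDataEncryption `
        -ResourceGroupName "rg-data" `
        -ServerName "sqlserver01" `
        -DatabaseName "customerdb"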

Finally, in the far right column is the SaaS model. The goal of this model is for the customer to obtain a service with as little administrative/operational overhead as possible. As shown in the matrix, the provider is responsible for everything in the stack from the application down, including networking, backup, patching, availability and physical security. IAM functions are shared as most SaaS is multi-tenant, so the provider must enforce isolation (in the same way as PaaS) and the customer must ensure only authorised personnel can access the SaaS solution.

You will note that endpoint security is also classed as a shared responsibility. Taking Office 365 as an example, Microsoft provides security tools such as anti-virus scanning and data loss prevention controls, but it is up to the customer to configure these to suit their use case. Microsoft’s responsibility ends with providing the service, and the customer’s starts with turning the knobs to make it work to their taste. You will also notice that, as in all other cases, it is solely the customer’s responsibility to ensure the classification and accountability of the data. This is not the same as the reliability of the services beneath it (networking, storage and compute), as that is addressed in the lower layers of the model.

I hope this article provides a bit of clarity on what the Shared Responsibility Model is and why you should care. Please don’t assume that just because you’re “going cloud” that a lot of these issues will go away. Get yourself some sound and trusted advice and make sure this model is accounted for in your project plan.

For your further reading pleasure, I have included links below to documentation explaining providers’ stances on and implementations of the model:-

As always, any questions or comments on this post can be left below or feel free to ping me on Twitter @ChrisBeckett

17-11-16

Azure VMs – New Auto-Shutdown Feature Sneaked In (almost!)

I saw the news the other day that Azure Backup has now been included on the VM management blade in the Azure Portal, which is great news as you don’t want to be jumping around in the portal to manage stuff where you don’t need to. However, one feature I noticed that appears to have sneaked into the VM management blade without any fanfare at all is the ability to schedule the automatic shutdown of a virtual machine.

Many customers ask for virtual machines to be shut down during off hours to save cost, once any backups and scheduled maintenance tasks have completed. Previously this had to be done by using Azure Automation to execute a runbook that shuts down the VMs. This is fine and a valid way of doing it, but on larger estates it ends up being a costed feature, as the time taken to run the runbooks exceeds the free-tier allowance.
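For reference, the Azure Automation approach mentioned above boils down to a runbook along these lines. It’s a minimal sketch that assumes the AzureRM modules and the standard Run As connection are present in the Automation account, and the resource group name is a placeholder:-

    # Authenticate using the Automation account's Run As connection
    $connection = Get-AutomationConnection -Name "AzureRunAsConnection"

    Add-AzureRmAccount -ServicePrincipal `
        -TenantId $connection.TenantId `
        -ApplicationId $connection.ApplicationId `
        -CertificateThumbprint $connection.CertificateThumbprint | Out-Null

    # Stop (deallocate) each VM in the resource group so compute charges stop accruing
    Get-AzureRmVM -ResourceGroupName "rg-dev-servers" | ForEach-Object {
        Stop-AzureRmVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -Force
    }

Attach that to a schedule asset in the Automation account and you get the same end result, with the runbook minutes counting against the free tier as described above.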

This typical requirement has obviously found its way back to the product management team at Microsoft, and to make it a lot easier to enable when spinning up VMs, it’s been added to the standard VM management blade, as shown below:-

[Screenshot: the new auto-shutdown option on the VM management blade]

As far as I can tell, this feature is either not in use yet or is only available in a small number of regions ahead of a broader roll-out. I tried it on VMs in UK South and North Europe, only to see this message:-

[Screenshot: the message displayed when trying to enable auto-shutdown in these regions]

Reading between the lines of the error message, will this feature allow starting the VM too? You’d have to hope so! I pinged Azure Support on Twitter to see when the feature would be fully available in the UK/EU and got a very speedy response (thanks, chaps!):-

[Embedded tweet: reply from Azure Support]

So stay tuned for this feature being enabled at some point in the near future. I’d also assume there will be some corresponding PowerShell command to go with it, so that you can add it to scripted methods of deploying multiple virtual machines.

01-09-16

Azure VNet Peering Preview Now Available

 


 

One of the networking features I preferred in AWS over Azure was the ease of peering VPCs together. As a quick primer, an AWS VPC is basically your own private cloud within AWS, with subnets and instances and all that good stuff. Azure VNets are very similar in that they are a logical grouping of subnets, instances, address spaces, etc. Previously, to link VNets together, you had to use a VPN connection. That’s all well and good, but it’s a little bit clunky and, in my opinion, not as elegant as VPC peering.

Anyway, Microsoft has recently announced that VNet peering within a region is now available as a preview feature. This means that it’s available for you to try out, but be warned it’s pre-release software (much like a beta programme) and it’s a bit warts and all. It’s not meant to be used for production purposes and it is not covered by any SLAs.

The benefits of VNet peering include:-

  • Eliminates the need for VPN connections between VNets
  • Connects ASM and ARM networks together
  • High-speed connectivity across the Azure backbone between VNets

Many of the same restrictions that govern the use of VPC peering in AWS apply to VNet peering too, including:-

  • Peering must occur in the same region
  • There is no transitive peering between VNets (if VNet A is peered with VNet B, and VNet B is peered with VNet C, VNet A does not automatically get a peering to VNet C)
  • There must be no overlap in the IP address space

While VNet peering is in preview, there is no charge for this service. Take a look at the documentation and give it a spin, in the test environment, obviously 😉
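If you do take it for a spin, the preview can be driven from PowerShell as well as the portal. Below is a minimal sketch using the AzureRM module with placeholder VNet and resource group names; note that the peering has to be created in both directions before traffic will flow:-

    # Look up the two VNets to be peered (names and resource group are placeholders)
    $vnetA = Get-AzureRmVirtualNetwork -Name "vnet-a" -ResourceGroupName "rg-network"
    $vnetB = Get-AzureRmVirtualNetwork -Name "vnet-b" -ResourceGroupName "rg-network"

    # Peering is not transitive and must be created from both sides
    Add-AzureRmVirtualNetworkPeering -Name "vnetA-to-vnetB" `
        -VirtualNetwork $vnetA -RemoteVirtualNetworkId $vnetB.Id

    Add-AzureRmVirtualNetworkPeering -Name "vnetB-to-vnetA" `
        -VirtualNetwork $vnetB -RemoteVirtualNetworkId $vnetA.Id

    # Check the peering state (it should show "Connected" once both sides exist)
    Get-AzureRmVirtualNetworkPeering -VirtualNetworkName "vnet-a" -ResourceGroupName "rg-network"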

 

08-04-16

Zero to Azure MCSD in a month (or so)

[Image: MCSD Azure Solutions Architect certification logo]

Today I passed the 70-532 exam to complete my MCSD, so I thought I would give some feedback for anyone else going down that road. I’ve only been hands-on with Azure for about three months, so getting here from a standing start has been a major accomplishment for me. That said, I think that with hard work and a bit of study dedication, it’s well within reach for most experienced IT pros.

Firstly, get them done as quickly as you can and don’t space them too far apart. I think from start to finish it’s taken me just over a month. I started with the 533, then the following week the 534 and then today the 532. I’d have done it sooner, but I spent some time recently on a non-Azure project which meant I lost a bit of momentum. Depending on your experience, confidence and availability, I’d suggest spacing them 1 to 2 weeks apart, certainly no more than that.

In terms of difficulty, 534 was one of the easiest exams I’ve ever sat and the result bore this out. It’s very high level and quite a few of the questions were what I would call “gimmes”. As it’s an architecture exam, you need to have a good understanding of the core Azure constructs and use cases for where they fit best.

533 was a bit harder but still well within my compass – this exam is more for people in an operational role, I’d say. Lots of knowledge is required about where to find knobs and things in both portals (ASM/ARM), service tiers and also plenty of PowerShell. That said, you don’t need to be a PowerShell guru; just understand which command to use, when to use it and which switches are appropriate. Also know the differences between VM quick create and the normal create flow, for example.

532 today was absolutely brutal and frankly I’m still amazed I managed to pass it. You need to be a hardcore developer to even know what they’re asking you. I basically read and re-read the questions and tried to apply some logic to my guesses, and obviously that paid off. Not only was the content more gruelling, but there were a lot more questions than I was expecting, meaning it’s a pretty thorough test of your skills. Tip – know Visual Studio and debugging/logging well.

Another tip: do it online from home and don’t go to a test centre if you can help it. I’ve found it a lot easier to relax and focus in my home surroundings. When I did the AWS exam at my local centre it was very noisy, which certainly didn’t help me pass it (which I didn’t).

Which order to take them? Depends – if you’re a Visual Studio propeller head, 532 first. If you’re coming from VMware like me, either 534 or 533. There is a huge amount of overlap between the questions in each exam, so loads on networking, VMs, storage, instance sizing, IaaS and PaaS tiers, the usual stuff. When you have the essentials down pat, you can apply this knowledge across all three exams. I’d say about 60-70% of each exam used common themes, with the remainder specific to that exam.

If you’re not confident in your Azure skills, buy one of the Microsoft exam Booster Packs from here and basically brute force your way through it. 532 would have been a good use case for this tactic in my case. It also takes the pressure off, especially if you’re funding it yourself, to know that you can resit a few times “for free”. They’re only $200 (£141 at today’s prices), so not much more expensive than a one-off exam, which costs around £118 in the UK.

In terms of training, generally the CBT Nuggets were very good and concise but woeful for 532. I know the exams will have been updated since those were recorded, but there’s little in the way of actual coding explanations (though to be fair, I didn’t get to the end of those videos).

I also used the official MS Press guides for each exam (532, 533 and 534), but they’re exceptionally dry and an excellent cure for insomnia. Only you know what works best for you, but I’d go for a hybrid approach of MS Press study guides, CBT Nuggets and Pluralsight, labbing stuff in Azure using your MSDN entitlement (if you have one, or get a free trial) and watching Channel 9 or MVA videos on topics you’re not sure of.

Also don’t forget that the Azure exam blueprints were recently updated (March 10th), so some training guides may not include items you may be tested on, such as OMS, for example. The excellent BuildAzure website has a good, concise article on what those changes are, for reference.

Do I feel like an Azure expert? Not really, no. But I’ve got a decent grasp of the concepts now and it’s up to me to build on those with some upcoming projects I have. One of the biggest challenges with cloud, and especially with studying for the cloud, is the fact that everything moves along so quickly. One day you log in to Azure and there are two new services. The following day, pricing has changed or functionality has been added to Traffic Manager, for example. It must be a major headache for the folks who write the exams!

What’s next for me is a VCAP6-DTM Design beta next week and then I’ll probably circle back for another crack at the AWS Certified Solutions Architect Pro.