07-08-19

Security starts with Windows + L (or command+control+Q)


<tap tap> Is this thing still on?

Well then, long time no blog. In case you are wondering, I joined Check Point as a Cloud Security Architect for UK & I last July, and have just celebrated my one-year anniversary. I’m not quite sure how I managed to end up here, but it’s been a fantastic experience and I’ve learned absolutely tons about cyber security in that time.

As such, the topic of my first blog since 865 BC (or what feels like it!) is something that absolutely grinds my gears, and one I’m hoping to raise a bit of awareness about. The cyber security industry is worth billions of pounds a year and the vendors in the market make some pretty awesome products. Businesses and organisations are taking this topic more seriously than ever, beefing up defences and increasing budgets.

CISOs and CSOs are now more commonplace and many cloud professionals are well educated on the importance of best practices such as making S3 buckets private, using security groups to control traffic flow and implementing solutions such as Check Point CloudGuard IaaS to provide deep packet inspection and IPS capabilities.

Automation gives us the ability to close configuration gaps and perform remediation quicker than a human could spot and fix them. This is all well and good, but attackers will always look for the easiest way to infiltrate an environment. I mean, why smash a window when the front door has been left open?
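
To make the automation point concrete, here’s a minimal sketch of the kind of remediation I mean (my own illustration, not any particular vendor’s product), using Python and boto3 to force S3 Block Public Access back on for a bucket. The bucket name is hypothetical.

    # Minimal remediation sketch: re-apply S3 Block Public Access to a bucket.
    # The sort of thing you might trigger from an automated config check.
    import boto3

    s3 = boto3.client("s3")

    def remediate_public_bucket(bucket_name):
        """Force all four Block Public Access settings back on."""
        s3.put_public_access_block(
            Bucket=bucket_name,
            PublicAccessBlockConfiguration={
                "BlockPublicAcls": True,
                "IgnorePublicAcls": True,
                "BlockPublicPolicy": True,
                "RestrictPublicBuckets": True,
            },
        )

    remediate_public_bucket("example-corp-data")  # hypothetical bucket name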

Lock your laptop, stoop.

I’ve been doing a lot of travelling the last year and it’s interesting to see how people behave. There is as much behavioural science in cyber security as there is technology. I find it staggering how many people will happily boot up their laptop in a public place, log in, open their e-mail, open a company document and then promptly get up and go to the toilet or order a coffee from the counter.

One further observation: the more “exclusive” the surroundings, the more likely it is that the individual will make this mistake. Two examples – a lady sat next to me in the lounge at Manchester Airport got ready to work and then promptly buggered off for five minutes. Similarly, I was in the first class carriage on a train and another lady from a pharma company (I won’t say which) opened a spreadsheet with an absolute f**k ton of customer data on it and then went off to the ladies (I presume; she was gone for ages) with the screen unlocked.

A better class of idiot

The one thing that connects these two examples is that they took place in a more “restricted” area. Presumably the assumption is that the better “class” of people you are sat with, the smaller the chance that anything nefarious will happen. It’s impossible to say for sure if this is actually true, but it shows how humans think. If I’m behind the velvet rope, all the thieving assholes are wandering through the duty free shops and drinking themselves into a coma.

Not necessarily true. Many data thieves are well funded (via legal avenues or otherwise) and so quite regularly pop up behind the velvet rope. They’ve done their research too and have seen the same things I have. Even taking a picture of a laptop screen with a mobile phone takes seconds; you don’t even need to touch a keyboard.

We know now that once data gets out there, you can’t get it back. Whether it’s corporate data or a tweet declaring your undying love for your secondary school English teacher from way back when.

Don’t overlook the simple stuff

At a customer event a couple of months ago, I asked for a show of hands on how many organisations present had a corporate policy on locking your workstation when you aren’t in front of it. About three quarters put their hand up. I followed that up with the question of how many organisations actually enforced this policy. How many do you think? The answer was none.

It’s great that organisations moving to the cloud are really boosting their skills and knowledge around security. It’s a fast-moving target and hard to keep up with, but some things are so simple that they often get overlooked.

Start with a policy mandating screen locking when a user walks away. Laptop, desktop, tablet, whatever. Make sure the lock screen has to be cleared by means of a password, PIN or ideally some biometrics such as a fingerprint.
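
If you want to back the policy up with enforcement, here’s a minimal sketch assuming Windows, admin rights and Python. It sets the “Interactive logon: Machine inactivity limit” registry value so idle sessions lock themselves; in practice you’d push this out via Group Policy rather than a script, but it shows the moving part.

    # Sketch: set the machine inactivity limit so Windows locks idle sessions.
    # Requires admin rights; normally you'd deploy this via Group Policy.
    import winreg

    KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"

    def set_inactivity_lock(seconds=300):
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
            # InactivityTimeoutSecs drives the lock; 300 = five minutes idle
            winreg.SetValueEx(key, "InactivityTimeoutSecs", 0,
                              winreg.REG_DWORD, seconds)

    set_inactivity_lock(300)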

This policy will cost you nothing but will make a huge difference. It’s amazing how quickly it becomes habit once you start doing it, meaning that users away from the office will do it without thinking. You could even follow this up by advising road warriors to get a privacy screen for their laptop (there are a bunch of them on Amazon or whatever your favourite e-tailer is). It’s all small, inexpensive stuff, but it forms a good layer of protection against data loss.

Do it today, and do yourself a favour. Like the great Maury Finkle of Finkle’s Fixtures says…


05-01-18

6/7 Ain’t Bad : AWS Certified Big Data – Specialty Exam Tips

I’m pleased to say I just returned from sitting the AWS Certified Big Data Specialty exam and I managed to just about pass it first time. As always, I try to give some feedback to the community to help those who are planning on having a go themselves.

The exam itself is 65 questions over 170 minutes. In terms of difficulty, it’s definitely harder than the Associate level exams and in some cases as tough as the Professional level exams. I didn’t feel particularly time constrained as with some other AWS exams, as most of the questions are reasonably short (and a couple of them don’t make sense, meaning you need to take a best-guess attempt at them).

In terms of preparation, I was lucky enough to be sent on the AWS Big Data course by my employer just before Christmas and it certainly helped, but there was some exam content I didn’t remember the course covering. I also chose LinuxAcademy over A Cloud Guru, but really only because LA has hands-on labs with its course and I don’t think ACG has them right now. There’s really no substitute for a hands-on lab to help understand a concept beyond the documentation.

I also use QwikLabs for hands-on lab practice. There are a number of free labs you can use to help with some of the basics; above that, for the more advanced labs, I’d recommend buying an Advantage Subscription, which allows you to take unlimited labs for a monthly charge. It’s about £40 if you’re in the UK, around $55 for US based folks. It might sound like a lot, but it’s cheaper than paying for an exam resit!

I won’t lie, Big Data is not my strong point and it’s also a topic I find quite dry, having been an infrastructure guy for 20 years or more. That being said, Big Data is a large part of the technology landscape we live in, and I always say a good architect knows a little bit about a lot of things.

As with other AWS exams, the questions are worded in a certain way. For example, “the most cost effective method”, “the most efficient method” or “the quickest method”. Maybe the latter examples are more subjective, but cost effectiveness usually points towards S3 and Lambda as opposed to massive EMR and Redshift clusters.

What should you focus on? Well the exam blueprint is always a good place to start. Some of the objectives are a bit generic, but you should have a sound grasp of what all the products are, the architecture of them and design patterns and anti-patterns (e.g. when not to use them). From here, you should be able to weed out some of the clearly incorrect answers to give you a statistically better chance of picking the correct answer.

Topic wise, I’d advise focusing on the following:-

  • Kinesis (Streams, Firehose, Analytics, data ingestion and export to other AWS services, tuning – see the sketch after this list)
  • DynamoDB (Performance tuning, partitioning, use patterns and anti-patterns, indexing)
  • S3 (Patterns and anti-patterns, IA/Glacier and lifecycling, partitioning)
  • Elastic MapReduce (Products used in conjunction and what they do – Spark, Hadoop, Zeppelin, Sqoop, Pig, Hive, etc.)
  • QuickSight (Use patterns/anti-patterns, chart types)
  • Redshift (Data ingestion, data export, slicing design, indexing, schema types)
  • Instance types (compute intensive, smaller numbers of large instances vs larger numbers of smaller instances)
  • Compression (performance, compression sizes)
  • Machine Learning (machine learning model types and when you’d use them)
  • IoT (understand the basics of AWS IoT architecture)
  • What services are multi-AZ and/or multi-region and how to work around geographic constraints
  • Data Import/Export (when to use, options)
  • Security (IAM, KMS, HSM, CloudTrail)
  • CloudWatch (log files, metrics, etc.)

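As promised above, here’s an illustrative sketch of the sort of Kinesis ingestion pattern the exam loves, using Python and boto3 to put a record onto a stream. The stream name, device ID and payload are all made up.

    # Sketch: write one record to a Kinesis stream.
    # The partition key determines which shard the record lands on.
    import json
    import boto3

    kinesis = boto3.client("kinesis")

    def send_event(stream_name, device_id, payload):
        kinesis.put_record(
            StreamName=stream_name,
            Data=json.dumps(payload).encode("utf-8"),
            PartitionKey=device_id,  # same key -> same shard, ordering preserved
        )

    send_event("example-telemetry", "sensor-42", {"temp_c": 21.5})

Worth noting the design point here: the partition key is what spreads load across shards, so a skewed key (one hot device) gives you a hot shard – a classic exam scenario.
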
As with many AWS exams, the topics seem very broad, so it’s well worth knowing a little about all of the above, but certainly focus on EMR and Redshift as they are the bedrock products of Big Data. If you know them well, I’d say you’re half way there.

You may also find re:Invent videos helpful, especially the Deep Dive ones at the 300 or 400 level. The exam is passable – if I can do it, anyone can! Hopefully this blog helped you out, as there doesn’t seem to be much information out there on the exam since it went GA.

Just the Networking Specialty to do now for the full set, hopefully I’ll get that done before my SA Professional expires in June!

 

07-03-17

What is the Cloud Shared Responsibility Model and why should I care?

When I have discussions with customers moving to a public cloud provider, one of the main topics of conversation (quite rightly) is security of services and servers in the cloud. Long discussions and whiteboarding sessions take place where loads of boxes and arrows are drawn, and in the end the customer is confident about the long term security of their organisation’s assets when moving to Azure or AWS.

Almost as an aside, one of the questions I ask is how patching of VMs will be performed and a very common answer is “doesn’t Microsoft/AWS patch them for us?”. At this point I ask if they’ve heard of the Shared Responsibility Model and often the answer is “no”. So much so that I thought a quick blog post was in order to reinforce this point.

So then, what is the Shared Responsibility Model? Put simply, when you move services into a public cloud provider, you are responsible for some or most of the security and operational aspects of the server (tuning, anti-virus, backup, etc.) and your provider is responsible for services lower down the stack that you don’t have access to, such as the hypervisor host, physical racks, power and cooling.

That being said, there is a bit more to it than that, depending on whether we’re talking about IaaS, PaaS or SaaS. The ownership of responsibility can be thought of as a “sliding scale” depending on the service model. To illustrate what I mean, take a look at the diagram below, helpfully stolen from Microsoft (thanks, boys!).
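
In case the image doesn’t survive, here’s a rough text rendering of that matrix (my reconstruction from memory, so treat the exact shading as approximate):

    Responsibility                          On-prem   IaaS       PaaS       SaaS
    Data classification & accountability    You       You        You        You
    Client & endpoint protection            You       You        You        Shared
    Identity & access management            You       You        Shared     Shared
    Application level controls              You       You        Shared     Provider
    Network controls                        You       You        Shared     Provider
    Host infrastructure                     You       Shared     Provider   Provider
    Physical security                       You       Provider   Provider   Provider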

Reading the diagram from left to right, you can see that in the leftmost column, where all services are hosted on prem, it is entirely the responsibility of the customer to provide all of the security characteristics. There is no cloud provider involved and you are responsible for racking, stacking, cooling, patching, cabling, IAM and networking.

As we move right to the IaaS column, you can see subtle shades of grey emerging (quite literally), as with IaaS you’re hosting virtual machines in a public cloud provider such as Azure or AWS. The provider is responsible for DC and rack security and some of the host infrastructure (for example, cloud providers patch the host on your behalf), but your responsibility is to ensure that workloads are effectively spread across hosts in appropriate fault and update domains for continuity of service.

Note however that in the IaaS model, as you the customer are pretty much responsible for everything from the guest upwards, it’s down to you to configure IAM and endpoint security and to keep up to date with security patches. This is where a lot of the confusion creeps in. Your cloud provider is not on the hook if you fail to patch and properly secure your VMs (including network and external access). Every IaaS project requires a patching and security strategy to be baked in from day one, not retrofitted. This may mean extending on prem AD and WSUS for IAM and patching to leverage existing processes. This is fine and will work; you don’t necessarily need to reinvent the wheel here. Plus, if you re-use existing processes, it may shorten any formal onboarding of the project with Service Management.
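
If you go the cloud-native route instead of WSUS, AWS has Systems Manager Patch Manager. As a hedged sketch (assuming your instances are SSM-managed; the instance ID is a placeholder), you could pull patch compliance into your own reporting like this:

    # Sketch: report patch compliance for SSM-managed instances.
    import boto3

    ssm = boto3.client("ssm")

    def report_missing_patches(instance_ids):
        resp = ssm.describe_instance_patch_states(InstanceIds=instance_ids)
        for state in resp["InstancePatchStates"]:
            print("{}: {} missing, {} failed".format(
                state["InstanceId"],
                state["MissingCount"],
                state["FailedCount"]))

    report_missing_patches(["i-0123456789abcdef0"])  # placeholder instance ID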

Carrying on across the matrix, the next column to the right is the PaaS model. In this model, you are consuming pre-built features from a cloud provider. This is most commonly database services such as SQL Server or MySQL, but also includes pre-built web environments such as Elastic Beanstalk in AWS. Because you are paying for a sliver of a larger, multi-tenant service, your provider will handle more layers of the lower stack, including the virtual machines the database engine is running on as well as the database engine itself. Typically in this example, the customer does not have any access to the underlying virtual machine via SSH or RDP, as they would with IaaS.

However, as the matrix shows, there is still a level of responsibility on the customer (though the operational burden is reduced). In the case of database PaaS, the customer is still in charge of backing up and securing (i.e. encryption and identity and access management) the data. This is not the responsibility of the provider, with the exception of logical isolation from other tenants and the physical security of the hardware involved.
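
To make that concrete with a hedged example (the instance identifier is made up): even with a managed database such as RDS, kicking off and retaining backups is your call, not the provider’s.

    # Sketch: take a manual snapshot of an RDS instance. Backups in PaaS
    # are still the customer's responsibility.
    from datetime import datetime, timezone
    import boto3

    rds = boto3.client("rds")

    def snapshot_database(db_instance):
        stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H%M")
        snapshot_id = "{}-manual-{}".format(db_instance, stamp)
        rds.create_db_snapshot(
            DBSnapshotIdentifier=snapshot_id,
            DBInstanceIdentifier=db_instance,
        )
        return snapshot_id

    print(snapshot_database("example-orders-db"))  # hypothetical instance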

Finally, in the far right column is the SaaS model. The goal of this model is for the customer to obtain a service with as little administrative/operational overhead as possible. As shown in the matrix, the provider is responsible for everything in the stack from the application down, including networking, backup, patching, availability and physical security. IAM functions are shared as most SaaS is multi-tenant, so the provider must enforce isolation (in the same way as PaaS) and the customer must ensure only authorised personnel can access the SaaS solution.

You will note that endpoint security is also classed as a shared responsibility. Taking Office 365 as an example, Microsoft provides security tools such as anti-virus scanning and data loss prevention controls, but it is up to the customer to configure them to suit their use case. Microsoft’s responsibility ends with providing the service and the customer’s starts with turning the knobs to make it work to their taste. You will also notice that, as in all other cases, it is solely the customer’s responsibility to ensure the classification and accountability of the data. This is not the same as the reliability of the services beneath it (networking, storage and compute), as this is addressed in the lower layers of the model.

I hope this article provides a bit of clarity on what the Shared Responsibility Model is and why you should care. Please don’t assume that just because you’re “going cloud” that a lot of these issues will go away. Get yourself some sound and trusted advice and make sure this model is accounted for in your project plan.

For your further reading pleasure, I have included links below to documentation explaining each provider’s stance on and implementation of the model :-

As always, any questions or comments on this post can be left below or feel free to ping me on Twitter @ChrisBeckett