21-09-16

Achievement Unlocked : MCSA Office 365


I’m pleased to say that after a couple of attempts at 70-347, I successfully passed my MCSA : Office 365 last night. For those looking at doing this certification in the near future, I just wanted to pass on the benefit of my experience. You may think, like me, that Office 365 is a pretty straightforward suite of software. In some respects, it is. It’s pretty much the same Exchange, Office, SharePoint, etc. that you’ve always been used to, but the exams also expect you to know things like subscription plan differences, AD sync and much more.

Out of the two, I found the first exam 70-346 much easier. This in some ways lures you into a false sense of security in thinking the second will be much the same. This is really where I came unstuck. I got a little bit carried away and perhaps didn’t put quite as much effort as I should have done into my study and got a bit of a kicking in the end.

Once I dusted myself down and went back over the parts I didn’t know on the exam, I felt a lot more confident last night but I still took out the insurance policy of the Microsoft Booster Pack, which is an exam voucher plus 4 resits. Yes it’s more expensive, but it takes out the risk of running up large exam bills and takes the pressure off a bit too. The promotion runs until the end of this month, so if you want to take advantage, you’d better be quick.

Anyway, each exam was around 52 questions, with a couple of case studies thrown in, but most were the usual drag-and-drop, order-a-list and multiple-choice formats. If you’ve sat Microsoft exams before, there shouldn’t be anything about the format that surprises you.

So then, what to study?

  • PowerShell, PowerShell, PowerShell. You’ll get battered on this. Know common switches for things like user manipulation, mailbox settings, mobile devices, Lync configuration etc
  • Make sure you know all of the different Exchange migration methods and when to use them, what their advantages and disadvantages are (cutover, staged, remote move, IMAP, etc.)
  • Know the permissions model of SharePoint well – how to give anonymous access, how to remove it and how to set up site collection hierarchies
  • Install and play with AD Connect and make sure you understand how it works and how you can use it in a hybrid environment, same goes for ADFS if you don’t know that well
  • Know what integrates with Skype for Business
  • Know the plan differences well, especially Enterprise and Small Business plans. Know what is included and what isn’t
  • Did I mention PowerShell?

Resources I used :-

  • Microsoft MVA training – Managing Office 365 Identities and Services. A little dated now but still very useful
  • CBT Nuggets – very concise course giving you most of the information you need to know
  • Pluralsight – A bigger deep dive into things like SharePoint sites and administration, which was a gap for me initially

Good luck if you’re sitting this any time soon, just don’t underestimate it or it will bite you on the arse!

 

15-09-16

Office 365 Features – Quick Reference Matrix

I’ve been doing quite a bit with Office 365 lately, and I always get confused as to what services come under which plan (typical Microsoft!). You’ll also be asked about this if you’re doing the Office 365 MCSA exams (70-346 and 70-347), so it’s well worth knowing, even if just for that.

The gist of it :-

  • Exchange Online, SharePoint and Office Online (Office Web Apps) are available on every plan
  • Exchange Online, SharePoint, Skype for Business (Lync) and OneDrive for Business are available on every plan except the K1 plans
  • Office ProPlus requires E3, E4 or E5 Enterprise plans
  • Yammer is included, but with caveats (see notes table)

The matrix below has been lifted from Microsoft’s site and is current as of the time of this post. Beware this can and probably will change!

[Office 365 plan comparison matrix – image taken from Microsoft’s plan comparison page]

1. Project Online is not included, but can be purchased as a separate add-on service or added for free to the Office 365 Education plan.
2. Yammer Enterprise is not a component of Office 365 Government, but may be acquired at no cost as a standalone offer for each user licensed for Office 365 Government Plan E1, E3, E4, and K1. This offer is currently limited to customers which purchase Office 365 Government under Enterprise Agreement and Enterprise Subscription Agreements.
3. Azure RMS is not included, but can be purchased as a separate add-on service or added for free to the Office 365 Education plan.
4. To learn more about which RMS features are included with Office 365 plans, see Comparison of Rights Management Services (RMS) Offerings.
5. Office 365 Enterprise E5 contains Cloud PBX, PSTN Conferencing, and PSTN Calling capability. To implement PSTN Calling requires an additional plan purchase (either Local or Local and International).

Hope this helps!

01-09-16

Azure VNet Peering Preview Now Available

 


 

One of the networking features I preferred in AWS over Azure was the ease of peering VPCs together. As a quick primer, an AWS VPC is basically your own private cloud within AWS, with subnets and instances and all that good stuff. Azure VNets are very similar in that they are a logical grouping of subnets, instances, address spaces, etc. Previously, to link VNets together you had to use a VPN connection. That’s all well and good, but it’s a little bit clunky and, in my opinion, not as elegant as VPC peering.

Anyway, Microsoft has recently announced that VNet peering within a region is now available as a preview feature. This means that it’s available for you to try out, but be warned it’s pre-release software (much like a beta programme) and it’s a bit warts and all. It’s not meant to be used for production purposes and it is not covered by any SLAs.

The benefits of VNet peering include:-

  • Eliminates need for VPN connections between VNets
  • Connect ASM (classic) and ARM (Resource Manager) networks together
  • High speed connectivity across the Azure backbone between VNets

Many of the same restrictions that govern the use of VPC peering in AWS also apply to VNet peering, including:-

  • Peering must occur in the same region
  • There is no transitive peering between VNets (if VNet A is peered with VNet B, and VNet B with VNet C, VNet A still has no connectivity to VNet C through B)
  • There must be no overlap in the IP address space

While VNet peering is in preview, there is no charge for this service. Take a look at the documentation and give it a spin, in the test environment, obviously😉
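
If you’d rather script it than click through the portal, here’s a rough sketch using the azure-mgmt-network Python SDK. The subscription ID, resource group, VNet names and credentials below are placeholders, the exact method names vary a little between SDK versions, and remember that peering is directional, so a matching peering is needed in the opposite direction too.

```python
# Rough sketch only: peering two VNets in the same region with the azure-mgmt-network SDK.
# Subscription ID, resource group, VNet names and credentials are placeholders.
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.network import NetworkManagementClient

credentials = ServicePrincipalCredentials(client_id="<app-id>", secret="<key>", tenant="<tenant-id>")
network = NetworkManagementClient(credentials, "<subscription-id>")

rg, vnet_a, vnet_b = "my-rg", "vnet-a", "vnet-b"
vnet_b_id = network.virtual_networks.get(rg, vnet_b).id

# Peering is directional: a matching peering from vnet-b back to vnet-a is also
# needed before traffic will flow in both directions.
poller = network.virtual_network_peerings.create_or_update(
    rg, vnet_a, "vnet-a-to-vnet-b",
    {
        "remote_virtual_network": {"id": vnet_b_id},
        "allow_virtual_network_access": True,  # allow traffic between the peered VNets
        "allow_forwarded_traffic": False,
        "use_remote_gateways": False,
    },
)
poller.result()
```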

 

19-08-16

AWS : Keeping up with the changes


As we all know, working in the public cloud space means changes in the blink of an eye. Services are added, updated (and in some cases, removed) at short notice, and it’s vital, not just from a Solutions Architect’s perspective but from an end user or operational standpoint, that we keep up to date with these announcements as and when they happen.

In days of old, we’d keep an eye on a vendor’s annual conference when they’d reveal something cool in their keynote, with a release on that day or to follow shortly after. In the public cloud, innovation happens much quicker and it’s no longer a case of waiting for “Geek’s Christmas”.

To that end, today I was pointed towards the AWS “What’s New” blog, which in essence is a change log for AWS services. For yesterday alone, it lists 8 announcements or service updates.

It’s a site well worth bookmarking and reviewing on a regular basis, I’d suggest weekly if you have time. If you’re designing AWS infrastructures or running your business on AWS, you need to know what’s on the roadmap so you can plan accordingly.

You can visit the What’s New blog site here.

 

16-08-16

AWS Certified Solutions Architect Professional – Study Guide – Domain 8.0: Cloud Migration and Hybrid Architecture (10%)


The final part of the study guide is below – thanks to all those who have tuned in over the past few weeks and given some very positive feedback. I hope it helps (or has helped) you get into the Solutions Architect Pro club. It’s a tough exam to pass and the feeling of achievement is immense. Good luck!

8.1 Plan and execute for applications migrations

  • The AWS Management Portal for vCenter plugs AWS infrastructure into vCenter. It uses a virtual appliance and enables migration of vSphere workloads into AWS
  • Right click on VM and select “Migrate to EC2”
  • You then select region, environment, subnet, instance type, security group, private IP address
  • Use cases:-
    • Migrate VMs to EC2 (VM must be powered off and configured for DHCP)
    • Reach new regions from vCenter to use for DR etc
    • Self service AWS portal in vCenter
    • Create new EC2 instances using VM templates
  • The inventory view is presented as :-
    • Region
      • Environment (family of templates and subnets in AWS)
        • Template (prototype for EC2 instance)
          • Running instance
            • Folder for storing migrated VMs
  • Templates map to AMIs and can be used to let admins pick a type for their deployment
  • Storage Gateway can be used as a migration tool
    • Gateway cached volumes (block based iSCSI)
    • Gateway stored volumes (block based iSCSI)
    • Virtual tape library (iSCSI based VTL)
    • Takes snapshots of mounted iSCSI volumes and replicates them via HTTPS to AWS. From here they are stored in S3 as snapshots and then you can mount them as EBS volumes
    • It is recommended to get a consistent snapshot of the VM by powering it off, taking a VM snapshot and then replicating this
  • AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premise data sources, at specified intervals. With AWS Data Pipeline, you can regularly access your data where it’s stored, transform and process it at scale, and efficiently transfer the results to AWS services such as Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon Elastic MapReduce (EMR).
  • AWS Data Pipeline helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available. You don’t have to worry about ensuring resource availability, managing inter-task dependencies, retrying transient failures or timeouts in individual tasks, or creating a failure notification system. AWS Data Pipeline also allows you to move and process data that was previously locked up in on-premise data silos
  • Data Pipeline has the following concepts (see the sketch after this list):-
    • Pipeline (the container made up of the items below; runs on either an EC2 instance or an EMR node, which are provisioned automatically by Data Pipeline)
    • Datanode (endpoint/destination, such as an S3 bucket)
    • Activity (job kicked off by Data Pipeline, such as a database dump or command line script)
    • Precondition (readiness check optionally associated with a data source or activity. The activity will not run if the check fails. Standard and custom preconditions are available – DynamoDBTableExists, DynamoDBDataExists, S3KeyExists, S3PrefixExists, ShellCommandPrecondition)
    • Schedule
  • Pipelines can also be used with on-premises resources such as databases
  • The Task Runner package is installed on the on-premises resource to poll the Data Pipeline queue for work to do (database dump, copy to S3, etc.)
  • Much of the functionality has been replaced by Lambda
  • Set up logging to S3 so you can troubleshoot the pipeline
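
To make the concepts above a bit more concrete, here’s a minimal boto3 sketch of a pipeline that runs a shell command on a daily schedule from a short-lived EC2 resource. The bucket, IAM roles, command and dates are placeholder values and the definition is only a rough outline, so treat it as illustrative rather than a working template.

```python
# Illustrative only: a tiny Data Pipeline that runs a shell command daily
# from a short-lived EC2 resource. Bucket, roles, command and dates are made up.
import boto3

dp = boto3.client("datapipeline", region_name="eu-west-1")

pipeline_id = dp.create_pipeline(name="daily-log-copy", uniqueId="daily-log-copy-001")["pipelineId"]

objects = [
    {"id": "Default", "name": "Default", "fields": [
        {"key": "scheduleType", "stringValue": "cron"},
        {"key": "schedule", "refValue": "DailySchedule"},
        {"key": "role", "stringValue": "DataPipelineDefaultRole"},
        {"key": "resourceRole", "stringValue": "DataPipelineDefaultResourceRole"},
        {"key": "pipelineLogUri", "stringValue": "s3://my-bucket/dp-logs/"},  # logging to S3 for troubleshooting
    ]},
    {"id": "DailySchedule", "name": "DailySchedule", "fields": [
        {"key": "type", "stringValue": "Schedule"},
        {"key": "period", "stringValue": "1 day"},
        {"key": "startDateTime", "stringValue": "2016-08-17T01:00:00"},
    ]},
    {"id": "Ec2Resource", "name": "Ec2Resource", "fields": [
        {"key": "type", "stringValue": "Ec2Resource"},
        {"key": "instanceType", "stringValue": "t2.micro"},
        {"key": "terminateAfter", "stringValue": "30 Minutes"},
    ]},
    {"id": "CopyLogs", "name": "CopyLogs", "fields": [
        {"key": "type", "stringValue": "ShellCommandActivity"},
        {"key": "command", "stringValue": "aws s3 cp /var/log/messages s3://my-bucket/logs/"},
        {"key": "runsOn", "refValue": "Ec2Resource"},
        {"key": "schedule", "refValue": "DailySchedule"},
    ]},
]

dp.put_pipeline_definition(pipelineId=pipeline_id, pipelineObjects=objects)
dp.activate_pipeline(pipelineId=pipeline_id)
```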

8.2 Demonstrate ability to design hybrid cloud architectures

  • The biggest CIDR block you can reserve for a VPC is a /16 and the smallest is a /28
  • The first four IP addresses and the last one in each subnet are reserved by AWS – always 5 reserved (see the quick check after this list)
    • 10.0.0.0 – Network address
    • 10.0.0.1 – Reserved for VPC router
    • 10.0.0.2 – Reserved by AWS for DNS services
    • 10.0.0.3 – Reserved by AWS for future use
    • 10.0.0.255 – Reserved for network broadcast. Network broadcast not supported in a VPC, so this is reserved
  • When migrating to Direct Connect from a VPN, run the VPN connection and the Direct Connect connection(s) as part of the same BGP peering setup, then make the VPN path less preferable than the Direct Connect path. AS path prepending achieves this, because BGP prefers the shortest AS path: a route with a single ASN is preferred over one where the ASN has been prepended three or four times
  • For applications that require multicast, you need to configure a VPN between the EC2 instances with in-instance software, so the underlying AWS infrastructure is not aware of it. Multicast is not supported by AWS
  • VPN network must be a different CIDR block than the underlying instances are using (for example 10.x address for EC2 instances and 172.16.x addresses for VPN connection to another VPC)
  • SQL Server can be migrated by exporting the database as flat files from SQL Server Management Studio; you can’t replicate to another region or from on-premises to AWS
  • CloudSearch can index documents stored in S3 and is powered by Apache Solr
    • Full text search
    • Drill down searching
    • Highlighting
    • Boolean search
    • Autocomplete
    • CSV, PDF, HTML, Office docs and text files supported
  • Can also search DynamoDB with CloudSearch
  • CloudSearch can automatically scale based on load or can be manually scaled ahead of expected load increase
  • Multi-AZ is supported; CloudSearch is essentially a service hosted on EC2 instances, which is how the costs are derived
  • EMR can be used to run batch processing jobs, such as filtering log files and putting results into S3
  • EMR uses Hadoop which uses HDFS, a distributed file system across all nodes in the cluster where there are multiple copies of the data, meaning resilience of the data and also enables parallel processing across multiple nodes
  • Hive is used to perform SQL like queries on the data in Hadoop, uses simple syntax to process large data sets
  • Pig is used to write MapReduce programs
  • An EMR cluster has three components (see the launch sketch after this list):-
    • Master node (manages data distribution)
    • Core node (stores data on HDFS from tasks run by task nodes and are managed by the master node)
    • Task nodes (managed by the master node and perform processing tasks only, do not form part of HDFS and pass processed data back to core nodes for storage)
  • EMRFS can be used to output data to S3 instead of HDFS
  • Can use spot, on demand or reserved instances for EMR cluster nodes
  • S3DistCp is an extension of DistCp that is optimized to work with AWS, particularly Amazon S3. You use S3DistCp by adding it as a step in a cluster or at the command line. Using S3DistCp, you can efficiently copy large amounts of data from Amazon S3 into HDFS where it can be processed by subsequent steps in your Amazon EMR cluster
  • Larger data files are more efficient than smaller ones in EMR
  • Storing data persistently on S3 may well be cheaper than leveraging HDFS, as large data sets will require large instance sizes in the EMR cluster
  • Smaller EMR cluster with larger nodes may be just as efficient but more cost effective
  • Try to complete jobs within 59 minutes to save money (EMR billed by hour)
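
The reserved-address maths near the top of this list is easy to sanity check with Python’s standard ipaddress module; here’s a quick sketch using the 10.0.0.0/24 example above:

```python
# Quick illustration of the five addresses AWS reserves in every subnet,
# using the 10.0.0.0/24 example from the list above (standard library only).
import ipaddress

subnet = ipaddress.ip_network("10.0.0.0/24")
hosts = list(subnet.hosts())            # 10.0.0.1 .. 10.0.0.254

reserved = {
    subnet.network_address: "network address",
    hosts[0]: "reserved for the VPC router",
    hosts[1]: "reserved by AWS for DNS",
    hosts[2]: "reserved by AWS for future use",
    subnet.broadcast_address: "broadcast (not supported in a VPC, still reserved)",
}
for ip, reason in reserved.items():
    print(f"{ip}\t{reason}")

usable = subnet.num_addresses - len(reserved)
print(f"{subnet} gives {usable} usable addresses out of {subnet.num_addresses}")
```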
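
And to tie together the master/core/task node split, here’s a hedged boto3 sketch of launching a small EMR cluster with the task nodes on spot and an S3DistCp step to pull data from S3 into HDFS. The bucket, release label, instance types and IAM role names are placeholders, not recommendations.

```python
# Sketch: small EMR cluster with master/core/task groups and an s3-dist-cp step.
# Bucket, release label, instance types and role names are placeholders.
import boto3

emr = boto3.client("emr", region_name="eu-west-1")

resp = emr.run_job_flow(
    Name="log-filtering",
    ReleaseLabel="emr-4.7.2",
    Applications=[{"Name": "Hadoop"}, {"Name": "Hive"}],
    LogUri="s3://my-bucket/emr-logs/",
    Instances={
        "InstanceGroups": [
            {"Name": "Master", "InstanceRole": "MASTER", "InstanceType": "m3.xlarge", "InstanceCount": 1},
            {"Name": "Core", "InstanceRole": "CORE", "InstanceType": "m3.xlarge", "InstanceCount": 2},
            {"Name": "Task", "InstanceRole": "TASK", "InstanceType": "m3.xlarge", "InstanceCount": 2,
             "Market": "SPOT", "BidPrice": "0.10"},  # task nodes hold no HDFS data, so cheap to lose
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate once the steps finish (see the 59-minute tip)
    },
    Steps=[{
        "Name": "copy-input-from-s3",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["s3-dist-cp", "--src", "s3://my-bucket/raw-logs/", "--dest", "hdfs:///logs/"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(resp["JobFlowId"])
```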

15-08-16

QwikLabs Competition Winner


Just a quick post today to say thanks to everyone who entered the QwikLabs competition and as promised, we have a winner! The random number generator picked out Hardik Mistry and he has already unwrapped his prize! Thanks again to QwikLabs for the token and for their support. If you haven’t yet swung by their site, I highly recommend it.

 

09-08-16

AWS Certified Solutions Architect Professional – Study Guide – Domain 7.0: Scalability and Elasticity (15%)


7.1 Demonstrate the ability to design a loosely coupled system

  • Amazon CloudFront is a web service (CDN) that speeds up distribution of your static and dynamic web content, for example, .html, .css, .php, image, and media files, to end users. CloudFront delivers your content through a worldwide network of edge locations. When an end user requests content that you’re serving with CloudFront, the user is routed to the edge location that provides the lowest latency, so content is delivered with the best possible performance. If the content is already in that edge location, CloudFront delivers it immediately. If the content is not currently in that edge location, CloudFront retrieves it from an Amazon S3 bucket or an HTTP server (for example, a web server) that you have identified as the source for the definitive version of your content.
  • CloudFront has two aspects – origin and distribution. You create a distribution and link it to an origin, such as S3, an EC2 instance, existing website etc
  • Two types of distributions, web and RTMP
  • Geo restrictions can be used to whitelist or blacklist traffic from specific countries, blocking access to the distribution
  • GET, HEAD, PUT, POST, PATCH, DELETE and OPTIONS HTTP commands supported
  • Allowed methods are what CloudFront will pass on to the origin server. If you do not need to modify content, consider not allowing PUT, POST, PATCH and DELETE to ensure users cannot modify content
  • CloudFront does not cache responses to POST, PUT, DELETE and PATCH requests; you can POST content to an edge location and it is then sent on to the origin server
  • SSL can be used to provide HTTPS. Can either use CloudFront’s own certificate or use your own
    • To support older browsers, you need a dedicated-IP SSL certificate per edge location, which can be very expensive
    • SNI (Server Name Indication) custom SSL certs can be used by adding all hostnames behind the certificate but it is presented as a single IP address. Uses SNI extensions in newer browsers
  • 100 CNAME aliases per distribution, can use wildcard CNAMEs
  • Use invalidation requests to forcibly remove content from edge locations. You need to do this with an API call (see the invalidation sketch at the end of this list) or from the console, or set a TTL on the content
  • Alias records can be used to map a friendly name to a CloudFront URL (Route 53 supports this). Supports zone apex entry (name without www, such as example.com). DNS records for the same name must have the same routing type (simple, weighted, latency, etc) or you will get an error in the console
  • Alias records can then have “evaluate target” set to yes so that existing health checks are used to ensure the underlying resources are up before sending traffic onwards. If a health check for the underlying resource does not exist, evaluate target settings have no effect
  • AWS doesn’t charge for mapping alias records to CloudFront distributions
  • CloudFront supports dynamic web content using cookies to forward on to the origin server
  • Forwarding query strings passes the full URL (including the query string) to the origin if configured in CloudFront, but only for a web server or application, as S3 does not support this feature
  • Cookie values can then be logged in CloudFront access logs
  • CloudFront can be used to proxy upload requests back to the origin to speed up data transfers
  • Use a zero value TTL for dynamic content
  • Different URL patterns can send traffic to different origins
  • Whitelist certain HTTP headers such as cloudfront-viewer-country so that locale details can be passed through to the web server for custom content
  • Device detection can serve different content based on the User Agent string in the header request
  • Invalidating objects removes them from CloudFront edge caches. A faster and less expensive method is to use versioned object or directory names
  • Enable access logs in CloudFront and then send them to an S3 bucket. EMR can be used to analyse the logs
  • Signed URLs can be used to provide time limited access or access to private content on CloudFront. Signed cookies can be used to limit secure access to certain parts of the site. Use cases are signed URLs for a marketing e-mail and signed cookies for web site streaming or whole site authentication
  • The Cache-Control max-age header is sent to the browser to control how long content stays in the local browser cache, which can help improve delivery, especially of static items
  • If-Modified-Since allows the browser to request content only if it is newer than the modification date specified in the request. If the content has not changed, it is pulled from the browser cache
  • Set a low TTL for dynamic content as most content can be cached even if it’s only for a few seconds. CloudFront can also present stale data if TTL is long
  • Popular Objects report and cache statistics can help you tune CloudFront behaviour
  • Only forward cookies that are used to vary or tailor user based content
  • Use Smooth Streaming on a web distribution for live streaming using Microsoft technology
  • RTMP is true media streaming; progressive download delivers the file in chunks to, say, a mobile device. RTMP is Flash only
  • Supports existing WAF policies
  • You can create custom error response pages
  • Two ElastiCache engines available – Redis and Memcached. Exam will give scenarios and you must select the most appropriate
  • As a rule of thumb, simple caching is done by memcached and complex caching is done by Redis
  • Only Redis is multi-AZ and has backup and restore and persistence capabilities, sorting, publisher/subscriber, failover
  • Redis can be used as a persistent key store or as a pure caching engine
  • Redis has backup and restore and automatic failover and is best used for frequently changing, more complex data sets
  • It doesn’t need a database behind it in the way Memcached does
  • Leaderboards are a good use case for Redis
  • Redis can be configured to use an Append Only File (AOF) that will repopulate the cache in case all nodes are lost and cache is cleared. This is disabled by default. AOF is like a replay log
  • Redis has a primary node and read only nodes. If the primary fails, a read only node is promoted to primary. Writes done to primary node, reads done from read replicas (asynchronous replication)
  • Redis snapshots are used to increase the size of nodes. This is not the same as EC2 snapshots, the snapshot creates a new node based on the snapshot and size is picked when launching
  • Redis can be configured to automatically backup daily in a window or manual snapshots. Automatic have retention limits, manual don’t
  • Memcached can scale horizontally and is multi-threaded, supports sharding
  • Memcached uses lazy loading, so if an app doesn’t get a hit from the cache, it requests the data from the DB and then puts it into the cache. Write-through updates the cache whenever the database is updated (both patterns are sketched at the end of this list)
  • TTL can be used to expire out stale or unread data from the cache
  • Memcached does not maintain its own data persistence, the database does this; scale by adding more nodes to a cluster
  • Vertically scaling memcached nodes requires standing up a new cluster of required instance sizes/types. All instance types in a cluster are the same type
  • Single endpoint for all memcached nodes
  • Put memcached nodes in different AZs
  • Memcached nodes are empty when first provisioned – bear this in mind when scaling out, as this will affect cache performance while the nodes warm up
  • For low latency applications, place Memcached clusters in the same AZ as the application stack. More configuration and management, but better performance
  • When deciding between Memcached and Redis, here are a few questions to consider:
    • Is object caching your primary goal, for example to offload your database? If so, use Memcached.
    • Are you interested in as simple a caching model as possible? If so, use Memcached.
    • Are you planning on running large cache nodes, and require multithreaded performance with utilization of multiple cores? If so, use Memcached.
    • Do you want the ability to scale your cache horizontally as you grow? If so, use Memcached.
    • Does your app need to atomically increment or decrement counters? If so, use either Redis or Memcached.
    • Are you looking for more advanced data types, such as lists, hashes, and sets? If so, use Redis.
    • Does sorting and ranking datasets in memory help you, such as with leaderboards? If so, use Redis.
    • Are publish and subscribe (pub/sub) capabilities of use to your application? If so, use Redis.
    • Is persistence of your key store important? If so, use Redis.
    • Do you want to run in multiple AWS Availability Zones (Multi-AZ) with failover? If so, use Redis.
  • Amazon Kinesis is a managed service that scales elastically for real-time processing of streaming data at a massive scale. The service collects large streams of data records that can then be consumed in real time by multiple data-processing applications that can be run on Amazon EC2 instances.
  • You’ll create data-processing applications, known as Amazon Kinesis Streams applications. A typical Amazon Kinesis Streams application reads data from an Amazon Kinesis stream as data records. These applications can use the Amazon Kinesis Client Library, and they can run on Amazon EC2 instances. The processed records can be sent to dashboards, used to generate alerts, dynamically change pricing and advertising strategies, or send data to a variety of other AWS services. The PutRecord command is used to put data into a stream
  • Data is stored in Kinesis for 24 hours, but this can go up to 7 days
  • You can use Streams for rapid and continuous data intake and aggregation. The type of data used includes IT infrastructure log data, application logs, social media, market data feeds, and web clickstream data. Because the response time for the data intake and processing is in real time, the processing is typically lightweight
  • The following are typical scenarios for using Streams
    • Accelerated log and data feed intake and processing
    • Real-time metrics and reporting
    • Real-time data analytics
    • Complex stream processing
  • An Amazon Kinesis stream is an ordered sequence of data records. Each record in the stream has a sequence number that is assigned by Streams. The data records in the stream are distributed into shards
  • A data record is the unit of data stored in an Amazon Kinesis stream. Data records are composed of a sequence number, partition key, and data blob, which is an immutable sequence of bytes. Streams does not inspect, interpret, or change the data in the blob in any way. A data blob can be up to 1 MB
  • Retention Period is the length of time data records are accessible after they are added to the stream. A stream’s retention period is set to a default of 24 hours after creation. You can increase the retention period up to 168 hours (7 days) using the IncreaseRetentionPeriod operation
  • A partition key is used to group data by shard within a stream
  • Each data record has a unique sequence number. The sequence number is assigned by Streams after you write to the stream with client.putRecords or client.putRecord
  • In summary, a record has three things:-
    • Sequence number
    • Partition key
    • Data BLOB
  • Producers put records into Amazon Kinesis Streams. For example, a web server sending log data to a stream is a producer
  • Consumers get records from Amazon Kinesis Streams and process them. These consumers are known as Amazon Kinesis Streams Applications
  • An Amazon Kinesis Streams application is a consumer of a stream that commonly runs on a fleet of EC2 instances
  • A shard is a uniquely identified group of data records in a stream. A stream is composed of one or more shards, each of which provides a fixed unit of capacity
  • Once a stream is created, you can add data to it in the form of records. A record is a data structure that contains the data to be processed in the form of a data blob. After you store the data in the record, Streams does not inspect, interpret, or change the data in any way. Each record also has an associated sequence number and partition key
  • There are two different operations in the Streams API that add data to a stream, PutRecords and PutRecord. The PutRecords operation sends multiple records to your stream per HTTP request, and the singular PutRecord operation sends records to your stream one at a time (a separate HTTP request is required for each record). You should prefer PutRecords for most applications because it achieves higher throughput per data producer (see the producer sketch at the end of this list)
  • An Amazon Kinesis Streams producer is any application that puts user data records into an Amazon Kinesis stream (also called data ingestion). The Amazon Kinesis Producer Library (KPL) simplifies producer application development, allowing developers to achieve high write throughput to an Amazon Kinesis stream.
  • You can monitor the KPL with Amazon CloudWatch
  • The agent is a stand-alone Java software application that offers an easier way to collect and ingest data into Streams. The agent continuously monitors a set of log files and sends new data records to your Amazon Kinesis stream. By default, records within each file are determined by a new line, but can also be configured to handle multi-line records. The agent handles file rotation, checkpointing, and retry upon failures. It delivers all of your data in a reliable, timely, and simple manner. It also emits CloudWatch metrics to help you better monitor and troubleshoot the streaming process.
  • You can install the agent on Linux-based server environments such as web servers, front ends, log servers, and database servers. After installing, configure the agent by specifying the log files to monitor and the Amazon Kinesis stream names. After it is configured, the agent durably collects data from the log files and reliably submits the data to the Amazon Kinesis stream
  • SNS is the Simple Notification Service – a publisher creates a topic and subscribers then get updates sent to the topic. This can be push to Android, iOS, etc
  • Use SNS to send push notifications to desktops, Amazon Device Messaging, Apple Push for iOS and OSX, Baidu, Google Cloud for Android, MS push for Windows Phone and Windows Push notification services
  • Steps to create mobile push (see the mobile push sketch at the end of this list):-
    • Request credentials from mobile platforms
    • Request token from mobile platforms
    • Create platform application object
    • Publish message to mobile endpoint
  • Grid computing vs cluster computing
    • Grid computing is generally loosely coupled, often used with spot instances, and tends to grow and shrink as required. Use different regions and instance types
    • Distributed workloads
    • Designed for resilience (auto scaling) – horizontal scaling rather than vertical scaling
    • Cluster computing has two or more instances working together in low latency, high throughput environments
    • Uses same instance types
    • GPU instances do not support SR-IOV networking
  • Elastic Transcoder encodes media files and uses a pipeline with a source and destination bucket, a job and a preset (media type, watermarks, etc.). Presets are templates and may be altered to provide custom settings. Pipelines can only have one source and one destination bucket
  • Integrates into SNS for job status updates and alerts
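
A few sketches to round off the CloudFront, ElastiCache, Kinesis and SNS notes above. First, forcing objects out of edge caches with an invalidation request via boto3; the distribution ID and paths are made up, and remember that versioned object names are usually the faster and cheaper option:

```python
# Sketch: forcing objects out of CloudFront edge caches with an invalidation request.
# The distribution ID and paths are placeholders.
import time
import boto3

cf = boto3.client("cloudfront")

resp = cf.create_invalidation(
    DistributionId="E1234EXAMPLE",
    InvalidationBatch={
        "Paths": {"Quantity": 2, "Items": ["/index.html", "/css/*"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)
print(resp["Invalidation"]["Id"], resp["Invalidation"]["Status"])
```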
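
Next, the lazy loading (cache-aside) and write-through patterns mentioned in the ElastiCache notes, sketched with the redis Python client against a placeholder ElastiCache endpoint. The same idea applies to Memcached, and the database helpers here are just stand-ins:

```python
# Cache-aside ("lazy loading") and write-through patterns against an ElastiCache
# Redis endpoint. Endpoint, key names and the database helpers are hypothetical.
import json
import redis

cache = redis.StrictRedis(host="my-cluster.abc123.0001.euw1.cache.amazonaws.com", port=6379)
TTL = 300  # expire stale or unread entries after 5 minutes


def load_user_from_db(user_id):
    # stand-in for a real database read
    return {"id": user_id, "name": "example"}


def save_user_to_db(user_id, user):
    pass  # stand-in for a real database write


def get_user(user_id):
    """Lazy loading: only populate the cache on a miss."""
    cached = cache.get(f"user:{user_id}")
    if cached is not None:
        return json.loads(cached)              # cache hit
    user = load_user_from_db(user_id)          # cache miss: go to the database
    cache.setex(f"user:{user_id}", TTL, json.dumps(user))
    return user


def update_user(user_id, user):
    """Write-through: update the cache whenever the database is updated."""
    save_user_to_db(user_id, user)
    cache.setex(f"user:{user_id}", TTL, json.dumps(user))
```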
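
For Kinesis, a minimal producer that batches records with PutRecords rather than calling PutRecord once per record; the stream name and payloads are invented:

```python
# Minimal Kinesis producer sketch: batching records with PutRecords.
# Stream name and payloads are made up.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="eu-west-1")

records = [
    {
        "Data": json.dumps({"page": f"/item/{i}", "action": "view"}).encode("utf-8"),
        "PartitionKey": f"user-{i % 4}",  # the partition key groups records onto shards
    }
    for i in range(100)
]

resp = kinesis.put_records(StreamName="clickstream", Records=records)
print("Failed records:", resp["FailedRecordCount"])  # a real producer would retry these
```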
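
Finally, the mobile push steps expressed with boto3. The platform credential and device token come from the mobile platform (GCM in this example) and are placeholders here:

```python
# Sketch of the SNS mobile push steps: platform credential and device token are placeholders.
import boto3

sns = boto3.client("sns", region_name="eu-west-1")

# Steps 1 & 2: credentials requested from the platform, token collected by the mobile app
gcm_api_key = "<server-api-key-from-google>"
device_token = "<registration-token-from-the-device>"

# Step 3: create the platform application object
app = sns.create_platform_application(
    Name="my-android-app",
    Platform="GCM",
    Attributes={"PlatformCredential": gcm_api_key},
)

# Register the device as an endpoint under that application
endpoint = sns.create_platform_endpoint(
    PlatformApplicationArn=app["PlatformApplicationArn"],
    Token=device_token,
)

# Step 4: publish a message to the mobile endpoint
sns.publish(TargetArn=endpoint["EndpointArn"], Message="Hello from SNS mobile push")
```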