
Notes prepared while studying for the AWS Cloud Practitioner exam using various sources. You can also refer to the Notion notebook for more updates.
Security, Identity and Management
IAM (Identity and Access Management)
- AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources.
- You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
- IAM makes it easy to provide multiple users secure access to AWS resources.
- When you first create an AWS account, you begin with a single sign-in identity that has complete access to all AWS services and resources in the account.
- This identity is called the AWS account root user and is accessed by signing in with the email address and password that you used to create the account.
IAM can be used to manage:
- Users.
- Groups.
- Access policies.
- Roles.
- User credentials.
- User password policies.
- Multi-factor authentication (MFA)
- API keys for programmatic access (CLI).
IAM provides the following features:
- Shared access to your AWS account. Each IAM user has three main components: a user name, a password, and permissions to access various resources.
- Granular permissions: you can apply granular permissions with IAM.
- Secure access to AWS resources for applications that run on Amazon EC2.
- Multi-factor authentication (MFA): can be enabled/enforced for the AWS account and for individual users under the account. MFA uses an authentication device that continually generates random, six-digit, single-use authentication codes. You can authenticate using an MFA device in the following ways (see the sketch after this list):
    - Through the AWS Management Console - the user is prompted for a user name, password, and authentication code.
    - Using the AWS API - restrictions are added to IAM policies, and developers can request temporary security credentials and pass MFA parameters in their AWS STS API requests.
    - Using the AWS CLI - by obtaining temporary security credentials from STS (aws sts get-session-token).
    It is a best practice to always set up multi-factor authentication on the root account. The "root account" is the identity created when you set up the AWS account. It has complete admin access and is the only identity with this access by default. It is a best practice not to use the root account for anything other than billing.
- Identity federation (including AD, Facebook, etc.) can be configured, allowing secure access to resources in an AWS account without creating an IAM user account.
- Identity information for assurance.
- PCI DSS (Payment Card Industry Data Security Standard) compliance.
- Integrated with many AWS services.
- Eventually consistent.
- Free to use.
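As a minimal sketch of the CLI/API MFA flow above, using Python's boto3 SDK (the MFA device ARN and token code below are placeholders, not real values):

```python
import boto3

sts = boto3.client("sts")

# Exchange long-term credentials plus a current MFA code for temporary credentials.
# SerialNumber is the ARN of the user's virtual MFA device (hypothetical here).
response = sts.get_session_token(
    SerialNumber="arn:aws:iam::123456789012:mfa/example-user",
    TokenCode="123456",    # six-digit code from the MFA device
    DurationSeconds=3600,  # temporary credentials valid for one hour
)

creds = response["Credentials"]
print(creds["AccessKeyId"], creds["Expiration"])
```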
You can work with AWS Identity and Access Management in any of the following ways:
- AWS Management Console.
- AWS Command Line Tools.
- AWS SDKs.
- IAM HTTPS API.
By default, new users are created with NO access to any AWS services - they can only log in to the AWS console. Permission must be explicitly granted to allow a user to access an AWS service. IAM users are individuals who have been granted access to an AWS account.
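For illustration, a boto3 sketch of creating a new IAM user (which starts with no permissions) and then explicitly granting access by attaching an AWS managed policy; the user name is an example:

```python
import boto3

iam = boto3.client("iam")

# A brand-new IAM user has no permissions at all.
iam.create_user(UserName="example-user")

# Explicitly grant read-only access to Amazon S3 via an AWS managed policy.
iam.attach_user_policy(
    UserName="example-user",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```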
Intro to AWS Services
Cloud Computing: Cloud computing is the on-demand delivery of IT resources over the internet with pay-as-you-go pricing. On-demand delivery indicates that AWS has the resources you need, when you need them.
Deployment models for cloud computing
- Cloud-based deployment: migrate existing applications to the cloud, or design and build new applications in the cloud.
- Run all parts of the application in the cloud.
- Migrate existing applications to the cloud.
- Design and build new applications in the cloud.
- On-premises deployment: Also known as a private cloud deployment. In this model, resources are deployed on premises by using virtualization and resource management tools.
- Deploy resources by using virtualization and resource management tools.
- Increase resource utilization by using application management and virtualization technologies.
- Hybrid deployment: cloud-based resources are connected to on-premises infrastructure.
- Connect cloud-based resources to on-premises infrastructure.
- Integrate cloud-based resources with legacy IT applications.
Benefits of cloud computing
- Trade upfront expense for variable expense
- Stop spending money to run and maintain data centers
- Stop guessing capacity
- Increased speed and agility
- Benefit from massive economies of scale
- Go global in minutes
5 Pillars of AWS Well-Architected Framework
The AWS Well-Architected Framework helps cloud architects build the most secure, high-performing, resilient, and efficient infrastructure possible for their applications. The framework provides a consistent approach for customers and AWS Partner Network (APN) Partners to evaluate architectures, and provides guidance to implement designs that scale with your application needs over time.
- Operational Excellence: ability to support development and run workloads effectively, gain insight into their operation, and continuously improve supporting processes and procedures to deliver business value. There are five design principles for operational excellence in the cloud:
- Perform operations as code
- Make frequent, small, reversible changes
- Refine operations procedures frequently
- Anticipate failure
- Learn from all operational failures
- Security: ability to protect data, systems, and assets to take advantage of cloud technologies to improve your security. There are seven design principles for security in the cloud:
- Implement a strong identity foundation
- Enable traceability
- Apply security at all layers
- Automate security best practices
- Protect data in transit and at rest
- Keep people away from data
- Prepare for security events
- Reliability: ability of a workload to perform its intended function correctly and consistently when it’s expected to. This includes the ability to operate and test the workload through its total lifecycle. There are five design principles for reliability in the cloud:
- Automatically recover from failure
- Test recovery procedures
- Scale horizontally to increase aggregate workload availability
- Stop guessing capacity
- Manage change in automation
- Performance Efficiency: ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve. There are five design principles for performance efficiency in the cloud:
- Democratize advanced technologies
- Go global in minutes
- Use serverless architectures
- Experiment more often
- Consider mechanical sympathy
- Cost Optimization: ability to run systems to deliver business value at the lowest price point. There are five design principles for cost optimization in the cloud:
- Implement cloud financial management
- Adopt a consumption model
- Measure overall efficiency
- Stop spending money on undifferentiated heavy lifting
- Analyze and attribute expenditure
Compute in the cloud
Amazon EC2 (Elastic Compute Cloud)
Amazon Elastic Compute Cloud (Amazon EC2) provides secure, resizable compute capacity in the cloud as Amazon EC2 instances.
With an Amazon EC2 instance, you can use a virtual server to run applications in the AWS Cloud.
- You can provision and launch an Amazon EC2 instance within minutes.
- You can stop using it when you have finished running a workload.
- You pay only for the compute time you use when an instance is running, not when it is stopped or terminated.
- You can save costs by paying only for server capacity that you need or want.
EC2 runs on top of physical host machines managed by AWS using virtualization technology. When you spin up an EC2 instance, you aren't necessarily taking an entire host to yourself. Instead, you are sharing the host with multiple other instances, otherwise known as virtual machines. And a hypervisor running on the host machine is responsible for sharing the underlying physical resources between the virtual machines. This idea of sharing underlying hardware is called multitenancy.
The hypervisor is responsible for coordinating this multitenancy and it is managed by AWS. The hypervisor is responsible for isolating the virtual machines from each other as they share resources from the host.
You can make instances bigger or smaller whenever you need to. You also control the networking aspect of EC2, so you decide what type of requests make it to your server and whether they are publicly or privately accessible.
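As a sketch of provisioning an instance programmatically with boto3 (the AMI ID is a Region-specific placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Launch a single small general purpose instance.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID; varies by Region
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched", instance_id)

# Stop the instance once the workload finishes, to stop paying for compute time.
ec2.stop_instances(InstanceIds=[instance_id])
```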
Amazon EC2 Instance Types
Amazon EC2 instance types are optimized for different tasks. When selecting an instance type, consider the specific needs of your workloads and applications.
- General Purpose Instances: provide a balance of compute, memory, and networking resources. Suppose that you have an application in which the resource needs for compute, memory, and networking are roughly equivalent. You might consider running it on a general purpose instance because the application does not require optimization in any single resource area. You can use them for a variety of workloads, such as application servers, gaming servers, backend servers for enterprise applications, small and medium databases etc.
- Compute Optimized Instances: ideal for compute-bound applications that benefit from high-performance processors. Like general purpose instances, you can use compute optimized instances for workloads such as web, application, and gaming servers. The difference is that compute optimized instances are ideal for high-performance web servers, compute-intensive application servers, and dedicated gaming servers. You can also use compute optimized instances for batch processing workloads that require processing many transactions in a single group.
- Memory Optimized Instances: designed to deliver fast performance for workloads that process large datasets in memory. Suppose that you have a workload that requires large amounts of data to be preloaded before running an application. This scenario might be a high-performance database or a workload that involves performing real-time processing of a large amount of unstructured data. In these types of use cases, consider using a memory optimized instance. Memory optimized instances enable you to run workloads with high memory needs and receive great performance.
- Accelerated Computing Instances: use hardware accelerators, or coprocessors, to perform some functions more efficiently than is possible in software running on CPUs. Examples of these functions include floating-point number calculations, graphics processing, and data pattern matching.
- Storage Optimized Instances: designed for workloads that require high, sequential read and write access to large datasets on local storage. Examples of workloads suitable for storage optimized instances include distributed file systems, data warehousing applications, and high-frequency online transaction processing (OLTP) systems.
Amazon EC2 Pricing
With Amazon EC2, you pay only for the compute time that you use. Amazon EC2 offers a variety of pricing options for different use cases.
- On-Demand: ideal for short-term, irregular workloads that cannot be interrupted. No upfront costs or minimum contracts apply. The instances run continuously until you stop them, and you pay for only the compute time you use.
- Savings Plans: AWS offers Savings Plans for several compute services, including Amazon EC2. Amazon EC2 Savings Plans enable you to reduce your compute costs by committing to a consistent amount of compute usage for a 1-year or 3-year term. This term commitment results in savings of up to 72% over On-Demand costs.
- Reserved Instances: a billing discount applied to the use of On-Demand Instances in your account. You can purchase Standard Reserved and Convertible Reserved Instances for a 1-year or 3-year term, and Scheduled Reserved Instances for a 1-year term. You realize greater cost savings with the 3-year option. At the end of a Reserved Instance term, you can continue using the Amazon EC2 instance without interruption. However, you are charged On-Demand rates until you do one of the following:
    - Terminate the instance.
    - Purchase a new Reserved Instance that matches the instance attributes (instance type, Region, tenancy, and platform).
- Spot Instances: ideal for workloads with flexible start and end times, or that can withstand interruptions. Spot Instances use unused Amazon EC2 computing capacity and offer cost savings of up to 90% off On-Demand prices.
- Dedicated Instances/Hosts: physical servers with Amazon EC2 instance capacity that is fully dedicated to your use. You can use your existing per-socket, per-core, or per-VM software licenses to help maintain license compliance. You can purchase On-Demand Dedicated Hosts and Dedicated Host Reservations. Of all the Amazon EC2 options covered, Dedicated Hosts are the most expensive.
EC2 instance pricing varies depending on many variables:
- The buying option (On-demand, Savings Plans, Reserved, Spot, Dedicated)
- Selected instance type
- Selected Region
- Number of instances
- Load balancing
- Allocated Elastic IP Addresses
Amazon EC2 Auto Scaling
Scalability involves beginning with only the resources you need and designing your architecture to automatically respond to changing demand by scaling out or in. As a result, you pay for only the resources you use.
Amazon EC2 Auto Scaling enables you to automatically add or remove Amazon EC2 instances in response to changing application demand. By automatically scaling your instances in and out as needed, you are able to maintain a greater sense of application availability.
Within Amazon EC2 Auto Scaling, you can use two approaches: dynamic scaling and predictive scaling. To scale faster, you can use dynamic scaling and predictive scaling together.
- Dynamic scaling responds to changing demand.
- Predictive scaling automatically schedules the right number of Amazon EC2 instances based on predicted demand.
When you create an Auto Scaling group, you can set the minimum number of Amazon EC2 instances. The minimum capacity is the number of Amazon EC2 instances that launch immediately after you have created the Auto Scaling group. Next, you can set the desired capacity at two Amazon EC2 instances even though your application needs a minimum of a single Amazon EC2 instance to run. If you do not specify the desired number of Amazon EC2 instances in an Auto Scaling group, the desired capacity defaults to your minimum capacity. The third configuration that you can set in an Auto Scaling group is the maximum capacity. For example, you might configure the Auto Scaling group to scale out in response to increased demand, but only to a maximum of four Amazon EC2 instances.
Amazon EC2 Auto Scaling uses Amazon EC2 instances, you pay for only the instances you use, when you use them.
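A hedged boto3 sketch of the minimum, desired, and maximum capacities described above (the launch template name and Availability Zone are placeholders and would need to exist already):

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="example-asg",
    LaunchTemplate={
        "LaunchTemplateName": "example-template",  # placeholder launch template
        "Version": "$Latest",
    },
    MinSize=1,          # at least one instance always runs
    DesiredCapacity=2,  # start with two instances
    MaxSize=4,          # never scale out past four instances
    AvailabilityZones=["us-east-1a"],  # placeholder Availability Zone
)
```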
Elastic Load Balancing (ELB)
Elastic Load Balancing is the AWS service that automatically distributes incoming application traffic across multiple resources, such as Amazon EC2 instances.
A load balancer acts as a single point of contact for all incoming web traffic to your Auto Scaling group. This means that as you add or remove Amazon EC2 instances in response to the amount of incoming traffic, these requests route to the load balancer first. Then, if you have multiple Amazon EC2 instances, Elastic Load Balancing distributes the workload across the multiple instances so that no single instance has to carry the bulk of it.
Although Elastic Load Balancing and Amazon EC2 Auto Scaling are separate services, they work together to help ensure that applications running in Amazon EC2 can provide high performance and availability.
Messaging and Queuing
Applications are made of multiple components. The components communicate with each other to transmit data, fulfill requests, and keep the application running. Suppose that you have an application with tightly coupled components. These components might include databases, servers, the user interface, business logic, and so on. This type of architecture can be considered a monolithic application. In this approach to application architecture, if a single component fails, other components fail, and possibly the entire application fails.
To help maintain application availability when a single component fails, you can design your application through a microservices approach. In a microservices approach, application components are loosely coupled. In this case, if a single component fails, the other components continue to work because they are communicating with each other. The loose coupling prevents the entire application from failing.
When designing applications on AWS, you can take a microservices approach with services and components that fulfill different functions. Two services facilitate application integration: Amazon Simple Notification Service (Amazon SNS) and Amazon Simple Queue Service (Amazon SQS).
1. Amazon Simple Notification Service (Amazon SNS)
Amazon SNS is a publish/subscribe service. Using Amazon SNS topics, a publisher publishes messages to subscribers. Subscribers can be web servers, email addresses, AWS Lambda functions, or several other options.
2. Amazon Simple Queue Service (Amazon SQS)
Amazon SQS is a message queuing service. Using Amazon SQS, you can send, store, and receive messages between software components, without losing messages or requiring other services to be available. In Amazon SQS, an application sends messages into a queue. A user or service retrieves a message from the queue, processes it, and then deletes it from the queue.
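A small boto3 sketch of the SQS send/receive/delete cycle just described (the queue name is an example):

```python
import boto3

sqs = boto3.client("sqs")

# Create (or look up) a queue and send a message into it.
queue_url = sqs.create_queue(QueueName="example-queue")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody="order-1234 placed")

# A consumer retrieves the message, processes it, then deletes it from the queue.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in messages.get("Messages", []):
    print("Processing:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```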
Serverless Computing - AWS Lambda
The term “serverless” means that your code runs on servers, but you do not need to provision or manage these servers. With serverless computing, you can focus more on innovating new products and features instead of maintaining servers. Another benefit of serverless computing is the flexibility to scale serverless applications automatically. Serverless computing can adjust the applications' capacity by modifying the units of consumption, such as throughput and memory.
AWS Lambda is a service that lets you run code without needing to provision or manage servers. While using AWS Lambda, you pay only for the compute time that you consume. Charges apply only when your code is running. You can also run code for virtually any type of application or backend service, all with zero administration.
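To make this concrete, a minimal Python Lambda handler sketch; Lambda invokes the function with an event payload and a context object, and there are no servers to manage:

```python
import json

def lambda_handler(event, context):
    """Entry point that Lambda invokes; 'event' carries the request payload."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```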
In AWS, you can also build and run containerized applications. Containers provide you with a standard way to package your application's code and dependencies into a single object.
Amazon Elastic Container Service (Amazon ECS)
Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container management system that enables you to run and scale containerized applications on AWS. Amazon ECS supports Docker containers. With Amazon ECS, you can use API calls to launch and stop Docker-enabled applications.
Amazon Elastic Kubernetes Service (Amazon EKS)
Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed service that you can use to run Kubernetes on AWS.
Serverless Containers - AWS Fargate
AWS Fargate is a serverless compute engine for containers. It works with both Amazon ECS and Amazon EKS. With AWS Fargate, you no longer have to provision, configure, and scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing.
Amazon Elastic Container Registry (ECR)
Amazon Elastic Container Registry (Amazon ECR) is a fully managed container registry. It works with Amazon Elastic Kubernetes Service (EKS), Amazon Elastic Container Service (ECS), and AWS Lambda, simplifying your development-to-production workflow, and with AWS Fargate for one-click deployments. You can also use ECR with your own container environment.
AWS Outposts
AWS Outposts is a fully managed service that offers the same AWS infrastructure, AWS services, APIs, and tools to virtually any datacenter, co-location space, or on-premises facility for a truly consistent hybrid experience. It extends AWS infrastructure and services to your on-premises data center. Outposts can be used to support workloads that need to remain on-premises due to low latency or local data processing needs. AWS Outposts come in two variants:
- VMware Cloud on AWS Outposts allows you to use the same VMware control plane and APIs you use to run your infrastructure
- The AWS-native variant of AWS Outposts allows you to use the exact same APIs and control plane you use in the AWS Cloud, but on premises.
Amazon Athena
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
Athena is out-of-the-box integrated with AWS Glue Data Catalog, allowing you to create a unified metadata repository across various services, crawl data sources to discover schemas and populate your Catalog with new and modified table and partition definitions, and maintain schema versioning. You can also use Glue’s fully-managed ETL capabilities to transform data or convert it into columnar formats to optimize cost and improve performance.
AWS Best Practices
AWS recommends some practices to help organizations avoid unexpected charges. You are charged once a resource is allocated (even if it is not used). Thus, it is advised that once your work is completed, you should:
- Delete all Elastic Load Balancers.
- Terminate all unused EC2 instances.
- Delete any attached EBS volumes that are no longer needed.
- Release any unused Elastic IPs.
Global Infrastructure and Reliability
AWS Global Infrastructure
AWS serves over a million active customers in more than 240 countries and territories. The AWS Cloud infrastructure is built around AWS Regions and Availability Zones.
An AWS Region is a physical location in the world where we have multiple Availability Zones.
Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities.
Each Amazon Region is designed to be completely isolated from the other Amazon Regions. This achieves the greatest possible fault tolerance and stability. Each Availability Zone is isolated, but the Availability Zones in a Region are connected through low-latency links. Availability Zones are located tens of miles apart from each other. AWS provides you with the flexibility to place instances and store data within multiple geographic regions as well as across multiple Availability Zones within each AWS Region. Each Availability Zone is designed as an independent failure zone. This means that Availability Zones are physically separated within a typical metropolitan region and are located in lower risk flood plains.
In addition to discrete uninterruptible power supply (UPS) and onsite backup generation facilities, data centers located in different Availability Zones are designed to be supplied by independent substations to reduce the risk of an event on the power grid impacting more than one Availability Zone. Availability Zones are all redundantly connected to multiple tier-1 transit providers.
Selecting a Region
When determining the right Region for your services, data, and applications, consider the following four business factors.
- Compliance with data governance and legal requirements
- Proximity to your customers
- Available services within a Region
- Pricing
Edge Locations
An edge location is a site that Amazon CloudFront uses to store cached copies of your content closer to your customers for faster delivery (basically a CDN-Content Delivery Network).
Provisioning AWS Resources
AWS uses APIs to interact with all its services; in AWS, everything is an API call. An API is an application programming interface: a set of predetermined ways for you to interact with AWS services. You can invoke or call these APIs to provision, configure, and manage your AWS resources.
You can use the AWS Management Console, the AWS Command Line Interface, the AWS Software Development Kits, or various other tools like AWS CloudFormation, to create requests to send to AWS APIs to create and manage AWS resources.
AWS Management Console
The AWS Management Console is a web-based interface for accessing and managing AWS services. Through the console, you can manage your AWS resources visually and in a way that is easy to digest. The console includes wizards and automated workflows that can simplify the process of completing tasks.
You can also use the AWS Console mobile application to perform tasks such as monitoring resources, viewing alarms, and accessing billing information. Multiple identities can stay logged into the AWS Console mobile app at the same time.
AWS Command Line Interface
To save time when making API requests, you can use the AWS Command Line Interface (AWS CLI). The CLI allows you to make API calls using the terminal on your machine. By using AWS CLI, you can automate the actions that your services and applications perform through scripts.
AWS Software development kits (SDKs)
SDKs make it easier for you to use AWS services through an API designed for your programming language or platform. SDKs enable you to use AWS services with your existing applications or create entirely new applications that will run on AWS.
AWS Elastic Beanstalk
With AWS Elastic Beanstalk, you provide code and configuration settings, and Elastic Beanstalk deploys the resources necessary to perform the following tasks:
- Adjust capacity
- Load balancing
- Automatic scaling
- Application health monitoring
Elastic Beanstalk is a PaaS-like layer on top of AWS's IaaS services that abstracts away the underlying EC2 instances, Elastic Load Balancers, Auto Scaling groups, and so on. This makes it much easier for developers who don't want to deal with all the systems work to get their applications quickly deployed on AWS. It is very similar to other PaaS products such as Heroku, EngineYard, and Google App Engine. With Elastic Beanstalk, you don't need to understand how any of the underlying magic works.
AWS CloudFormation
With AWS CloudFormation, you can treat your infrastructure as code. This means that you can build an environment by writing lines of code instead of using the AWS Management Console to individually provision resources.
CloudFormation doesn't automatically do anything. It's simply a way to define all the resources needed for a deployment in a huge JSON file. So a CloudFormation template might create two Elastic Beanstalk environments (production and staging), a couple of ElastiCache clusters, a DynamoDB table, and then the proper DNS in Route 53. You upload this template to AWS, walk away, and 45 minutes later everything is ready and waiting. Since it's just a plain-text JSON file, you can keep it in source control, which provides a great way to version your application deployments. It also ensures that you have a repeatable, "known good" configuration that you can quickly deploy in a different Region.
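A hedged boto3 sketch of the idea: declare resources in a template (here, a single S3 bucket) and let CloudFormation provision them; the stack name and template content are illustrative only:

```python
import json
import boto3

cloudformation = boto3.client("cloudformation")

# Infrastructure as code: the template declares what should exist;
# CloudFormation works out how to create it.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ExampleBucket": {"Type": "AWS::S3::Bucket"}
    },
}

cloudformation.create_stack(
    StackName="example-stack",
    TemplateBody=json.dumps(template),
)
```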
AWS OpsWorks
AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments.
Networking
Amazon VPC (Virtual Private Cloud)
Amazon VPC enables you to provision an isolated section of the AWS Cloud. In this isolated section, you can launch resources in a virtual network that you define. Within a virtual private cloud (VPC), you can organize your resources into subnets. A subnet is a section of a VPC that can contain resources such as Amazon EC2 instances. Amazon VPC is the networking service you can use to establish boundaries around your AWS resources.
Internet Gateway: To allow public traffic from the internet to access your VPC, you attach an internet gateway to the VPC. An internet gateway is a connection between a VPC and the internet.
Virtual Private Gateway: To access private resources in a VPC, you can use a virtual private gateway (basically a VPN).
AWS Direct Connect
AWS Direct Connect is a service that enables you to establish a dedicated private connection between your data center and a VPC. The private connection that AWS Direct Connect provides helps you to reduce network costs and increase the amount of bandwidth that can travel through your network.
Subnets and Network Access Protocol
In a VPC, subnets are separate areas that are used to group together resources. A subnet is a section of a VPC in which you can group resources based on security or operational needs. Subnets can be public or private. In a VPC, subnets can communicate with each other.
Public subnets contain resources that need to be accessible by the public, such as an online store’s website.
Private subnets contain resources that should be accessible only through your private network, such as a database that contains customers’ personal information and order histories.
Network traffic in a VPC
When a customer requests data from an application hosted in the AWS Cloud, this request is sent as a packet. A packet is a unit of data sent over the internet or a network.
A packet enters a VPC through an internet gateway. Before a packet can enter into or exit from a subnet, its permissions are checked. These permissions indicate who sent the packet and how the packet is trying to communicate with the resources in a subnet.
The VPC component that checks packet permissions for subnets is a network access control list (ACL).
Network access control lists (ACLs)
A network access control list (ACL) is a virtual firewall that controls inbound and outbound traffic at the subnet level. Each AWS account includes a default network ACL. When configuring your VPC, you can use your account’s default network ACL or create custom network ACLs.
By default, your account’s default network ACL allows all inbound and outbound traffic, but you can modify it by adding your own rules. For custom network ACLs, all inbound and outbound traffic is denied until you add rules to specify which traffic to allow. Additionally, all network ACLs have an explicit deny rule. This rule ensures that if a packet doesn’t match any of the other rules on the list, the packet is denied. Network ACLs perform stateless packet filtering.
Stateless Packet Filtering - Network ACLs
Network ACLs perform stateless packet filtering. They remember nothing and check packets that cross the subnet border each way: inbound and outbound. When a packet response for that request comes back to the subnet, the network ACL does not remember your previous request. The network ACL checks the packet response against its list of rules to determine whether to allow or deny. After a packet has entered a subnet, it must have its permissions evaluated for resources within the subnet, such as Amazon EC2 instances.
The VPC component that checks packet permissions for an Amazon EC2 instance is a security group.
Security groups
A security group is a virtual firewall that controls inbound and outbound traffic for an Amazon EC2 instance. By default, a security group denies all inbound traffic and allows all outbound traffic. You can add custom rules to configure which traffic to allow or deny. Security groups perform stateful packet filtering.
Stateful packet filtering- Security groups
Security groups perform stateful packet filtering. They remember previous decisions made for incoming packets. When a packet response for that request returns to the instance, the security group remembers your previous request. The security group allows the response to proceed, regardless of inbound security group rules. Both network ACLs and security groups enable you to configure custom rules for the traffic in your VPC.
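For illustration, a boto3 sketch that adds an inbound HTTPS rule to a security group; because security groups are stateful, the response traffic for allowed requests is permitted automatically (the group ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTPS from anywhere; outbound responses are allowed
# automatically because security groups are stateful.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```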
Domain Name System (DNS)
DNS translates domain names to IP addresses so browsers can load internet resources; it acts as the phone book of the internet. This process of a website being displayed in the browser is called DNS resolution. DNS resolution is the process of translating a domain name to an IP address, and it involves a customer DNS resolver communicating with a company DNS server.
Amazon Route 53
Amazon Route 53 is a DNS web service. It gives developers and businesses a reliable way to route end users to internet applications hosted in AWS. Amazon Route 53 connects user requests to infrastructure running in AWS (such as Amazon EC2 instances and load balancers). It can route users to infrastructure outside of AWS. Another feature of Route 53 is the ability to manage the DNS records for domain names. You can register new domain names directly in Route 53. You can also transfer DNS records for existing domain names managed by other domain registrars. This enables you to manage all of your domain names within a single location.
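A sketch of managing a DNS record through the Route 53 API with boto3 (the hosted zone ID, domain name, and IP address are placeholders):

```python
import boto3

route53 = boto3.client("route53")

# Create or update an A record that points a domain name at an IP address.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)
```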
Example: How Amazon Route 53 and Amazon CloudFront deliver content
Storage and Databases
Instance Stores
Block-level storage volumes behave like physical hard drives. An instance store provides temporary block-level storage for an Amazon EC2 instance. An instance store is disk storage that is physically attached to the host computer for an EC2 instance, and therefore has the same lifespan as the instance. When the instance is terminated, you lose any data in the instance store.
Amazon Elastic Block Store (Amazon EBS)
Amazon Elastic Block Store (Amazon EBS) is a service that provides block-level storage volumes that you can use with Amazon EC2 instances. If you stop or terminate an Amazon EC2 instance, all the data on the attached EBS volume remains available.
To create an EBS volume, you define the configuration (such as volume size and type) and provision it. After you create an EBS volume, it can attach to an Amazon EC2 instance. Because EBS volumes are for data that needs to persist, it's important to back up the data. You can take incremental backups of EBS volumes by creating Amazon EBS snapshots.
Amazon EBS snapshots
An EBS snapshot is an incremental backup. This means that the first backup taken of a volume copies all the data. For subsequent backups, only the blocks of data that have changed since the most recent snapshot are saved.
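A one-call boto3 sketch of creating a snapshot; each snapshot after the first stores only the blocks that changed (the volume ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Snapshots are incremental: only blocks changed since the last snapshot are saved.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder EBS volume ID
    Description="Nightly backup of the data volume",
)
print("Snapshot started:", snapshot["SnapshotId"])
```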
Amazon Simple Storage Service (Amazon S3)
In object storage, each object consists of data, metadata, and a key. The data might be an image, video, text document, or any other type of file. Metadata contains information about what the data is, how it is used, the object size, and so on. An object's key is its unique identifier.
Amazon Simple Storage Service (Amazon S3) is a service that provides object-level storage. Amazon S3 stores data as objects in buckets.
When you upload a file to Amazon S3, you can set permissions to control visibility and access to it. You can also use the Amazon S3 versioning feature to track changes to your objects over time.
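A boto3 sketch of the bucket/object model, enabling versioning, and choosing a storage class at upload time (the bucket name is a placeholder; storage classes are covered next):

```python
import boto3

s3 = boto3.client("s3")

bucket = "example-notes-bucket"  # placeholder; bucket names are globally unique
s3.create_bucket(Bucket=bucket)  # outside us-east-1, a LocationConstraint is also required

# Track changes to objects over time with versioning.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Upload an object; the key is its unique identifier within the bucket.
s3.put_object(
    Bucket=bucket,
    Key="reports/2024/summary.txt",
    Body=b"quarterly summary...",
    StorageClass="STANDARD_IA",  # infrequent-access storage class, chosen per object
)
```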
Amazon S3 Storage Classes
With Amazon S3, you pay only for what you use. You can choose from a range of storage classes to select a fit for your business and cost needs. When selecting an Amazon S3 storage class, consider these two factors:
- How often you plan to retrieve your data
- How available you need your data to be
- S3 Standard: provides high availability for objects. A good choice for a wide range of use cases, such as websites, content distribution, and data analytics. S3 Standard has a higher cost than other storage classes intended for infrequently accessed data and archival storage.
    - Designed for frequently accessed data
    - Stores data in a minimum of three Availability Zones
- S3 Standard-Infrequent Access (S3 Standard-IA): ideal for data that is accessed infrequently but requires high availability when needed. S3 Standard-IA provides the same level of availability as S3 Standard but with a lower storage price and a higher retrieval price. Both S3 Standard and S3 Standard-IA store data in a minimum of three Availability Zones.
    - Ideal for infrequently accessed data
    - Similar to S3 Standard but has a lower storage price and higher retrieval price
- S3 One Zone-Infrequent Access (S3 One Zone-IA): compared to S3 Standard and S3 Standard-IA, which store data in a minimum of three Availability Zones, S3 One Zone-IA stores data in a single Availability Zone. This makes it a good storage class to consider if the following conditions apply:
    - You want to save costs on storage.
    - You can easily reproduce your data in the event of an Availability Zone failure.
- S3 Intelligent-Tiering: in the S3 Intelligent-Tiering storage class, Amazon S3 monitors objects’ access patterns. If you haven’t accessed an object for 30 consecutive days, Amazon S3 automatically moves it to the infrequent access tier, S3 Standard-IA. If you access an object in the infrequent access tier, Amazon S3 automatically moves it to the frequent access tier, S3 Standard.
    - Ideal for data with unknown or changing access patterns
    - Requires a small monthly monitoring and automation fee per object
- S3 Glacier: low-cost storage class that is ideal for data archiving.
    - Low-cost storage designed for data archiving
    - Able to retrieve objects within a few minutes to hours
- S3 Glacier Deep Archive: when deciding between S3 Glacier and S3 Glacier Deep Archive, consider how quickly you need to retrieve archived objects. You can retrieve objects stored in the S3 Glacier storage class within a few minutes to a few hours. By comparison, objects stored in the S3 Glacier Deep Archive storage class are retrieved within 12 hours.
Comparing Amazon EBS and Amazon S3
EBS (block storage): volumes of up to 16 TiB each, with the ability to survive termination of the Amazon EC2 instance they are attached to; volumes are available as SSD- or HDD-backed storage.
S3 (regional object storage): unlimited total storage, with individual objects up to 5 TB (5,000 GB) in size, specializing in write once/read many, and 99.999999999% (11 nines) durable.
- If there are a lot of unique pictures to store and display on the web, S3 is the best choice because it is already web enabled: every object (here, an image) has a URL whose access we can control. S3 also stores data redundantly across multiple Availability Zones, so no extra backups are needed. Storing these in S3 is also cheaper than EBS, because an EBS volume must be attached to an EC2 instance, whereas S3 is serverless and requires no EC2 instance.
- If you have a large video file of about 80 GB that requires editing, EBS is the best storage solution, because it only updates the blocks of content that have changed. In S3, after making changes you must upload the whole 80 GB file again, since it is object-level storage and does not break objects into smaller parts. EBS breaks the file into blocks and writes only the changes to the blocks that have changed.
In short: if you work with complete objects that change only occasionally, S3 wins. If you are doing complex read, write, and change operations, EBS is the clear winner.
Amazon Elastic File System (Amazon EFS)
Compared to block storage and object storage, file storage is ideal for use cases in which a large number of services and resources need to access the same data at the same time. Amazon Elastic File System (Amazon EFS) is a scalable file system used with AWS Cloud services and on-premises resources. As you add and remove files, Amazon EFS grows and shrinks automatically. It can scale on demand to petabytes without disrupting applications.
Comparing Amazon EBS and Amazon EFS
- An Amazon EBS volume stores data in a single Availability Zone. To attach an Amazon EC2 instance to an EBS volume, both the Amazon EC2 instance and the EBS volume must reside within the same Availability Zone.
- Amazon EFS is a regional service. It stores data in and across multiple Availability Zones. The duplicate storage enables you to access data concurrently from all the Availability Zones in the Region where a file system is located. Additionally, on-premises servers can access Amazon EFS using AWS Direct Connect.
Amazon Relational Database Service (Amazon RDS)
In a relational database, data is stored in a way that relates it to other pieces of data. Relational databases use structured query language (SQL) to store and query data. This approach allows data to be stored in an easily understandable, consistent, and scalable way.
Amazon Relational Database Service (Amazon RDS) is a service that enables you to run relational databases in the AWS Cloud. Amazon RDS is a managed service that automates tasks such as hardware provisioning, database setup, patching, and backups. With these capabilities, you can spend less time completing administrative tasks and more time using data to innovate your applications. You can integrate Amazon RDS with other services to fulfill your business and operational needs, such as using AWS Lambda to query your database from a serverless application.
Amazon RDS is available on six database engines, which optimize for memory, performance, or input/output (I/O). Supported database engines include:
- Amazon Aurora
- PostgreSQL
- MySQL
- MariaDB
- Oracle Database
- Microsoft SQL Server
Amazon Aurora
Amazon Aurora is an enterprise-class relational database. It is compatible with MySQL and PostgreSQL relational databases. It is up to five times faster than standard MySQL databases and up to three times faster than standard PostgreSQL databases. Amazon Aurora helps to reduce your database costs by reducing unnecessary input/output (I/O) operations, while ensuring that your database resources remain reliable and available.
Consider Amazon Aurora if your workloads require high availability. It replicates six copies of your data across three Availability Zones and continuously backs up your data to Amazon S3.
Amazon DynamoDB
In a nonrelational database, you create tables. A table is a place where you can store and query data. Nonrelational databases are sometimes referred to as “NoSQL databases” because they use structures other than rows and columns to organize data. One type of structural approach for nonrelational databases is key-value pairs. With key-value pairs, data is organized into items (keys), and items have attributes (values). In a key-value database, you can add or remove attributes from items in the table at any time. Additionally, not every item in the table has to have the same attributes.
Amazon DynamoDB is a key-value database service. It delivers single-digit millisecond performance at any scale. DynamoDB is serverless, which means that you do not have to provision, patch, or manage servers. You also do not have to install, maintain, or operate software. As the size of your database shrinks or grows, DynamoDB automatically scales to adjust for changes in capacity while maintaining consistent performance. This makes it a suitable choice for use cases that require high performance while scaling.
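A key-value sketch using boto3's DynamoDB resource interface; note that the two items do not need the same attributes (the table name and key schema are hypothetical, and the table must already exist):

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Customers")  # hypothetical table with partition key CustomerId

# Items in the same table can have different attributes.
table.put_item(Item={"CustomerId": "c-001", "Name": "Ana", "Tier": "gold"})
table.put_item(Item={"CustomerId": "c-002", "Name": "Raj"})

# Key-value lookup with single-digit-millisecond performance.
item = table.get_item(Key={"CustomerId": "c-001"})["Item"]
print(item["Name"])
```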
Amazon Redshift
Amazon Redshift is a data warehousing service that you can use for big data analytics. It offers the ability to collect data from many sources and helps you to understand relationships and trends across your data.
AWS Database Migration Service (DMS)
AWS Database Migration Service (AWS DMS) enables you to migrate relational databases, nonrelational databases, and other types of data stores.
With AWS DMS, you move data between a source database and a target database. The source and target databases can be of the same type or different types. During the migration, your source database remains operational, reducing downtime for any applications that rely on the database.
Amazon DocumentDB
Amazon DocumentDB is a document database service that supports MongoDB workloads. (MongoDB is a document database program.)
Amazon Neptune
Amazon Neptune is a graph database service. You can use Amazon Neptune to build and run applications that work with highly connected datasets, such as recommendation engines, fraud detection, and knowledge graphs.
Amazon Quantum Ledger Database (Amazon QLDB)
Amazon Quantum Ledger Database (Amazon QLDB) is a ledger database service. You can use Amazon QLDB to review a complete history of all the changes that have been made to your application data.
Amazon Managed Blockchain
Amazon Managed Blockchain is a service that you can use to create and manage blockchain networks with open-source frameworks. Blockchain is a distributed ledger system that lets multiple parties run transactions and share data without a central authority.
Amazon ElastiCache
Amazon ElastiCache is a service that adds caching layers on top of your databases to help improve the read times of common requests. It supports two types of data stores: Redis and Memcached.
Amazon DynamoDB Accelerator
Amazon DynamoDB Accelerator (DAX) is an in-memory cache for DynamoDB. It helps improve response times from single-digit milliseconds to microseconds.
Security
This section covers the security mechanisms offered on the AWS Cloud, such as the shared responsibility model. With the shared responsibility model, AWS controls security of the cloud and customers control security in the cloud.
Shared Responsibility Model
AWS is responsible for some parts of your environment and you (the customer) are responsible for other parts. This concept is known as the shared responsibility model.
The shared responsibility model divides into customer responsibilities (commonly referred to as “security in the cloud”) and AWS responsibilities (commonly referred to as “security of the cloud”).
- Customers: Security in the cloud. Customers are responsible for the security of everything that they create and put in the AWS Cloud. The customer maintains complete control over their content, the services they use, and who they grant access to.
- AWS: Security of the cloud
AWS is responsible for security of the cloud. AWS operates, manages, and controls the components at all layers of infrastructure. This includes areas such as the host operating system, the virtualization layer, and even the physical security of the data centers from which services operate. AWS is responsible for protecting the global infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure includes AWS Regions, Availability Zones, and edge locations. AWS manages the security of the cloud, specifically the physical infrastructure that hosts your resources, which include:
- Physical security of data centers
- Hardware and software infrastructure
- Network infrastructure
- Virtualization infrastructure

This customer/AWS shared responsibility model also extends to IT controls. Just as the responsibility to operate the IT environment is shared between AWS and its customers, so are the management, operation, and verification of IT controls. Below are examples of controls that are managed by AWS, by AWS customers, and/or by both.
- Inherited Controls – Controls which a customer fully inherits from AWS.
- Physical and Environmental controls
- Shared Controls – Controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services. Examples include:
- Patch Management – AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.
- Configuration Management – AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.
- Awareness & Training - AWS trains AWS employees, but a customer must train their own employees.
- Customer Specific – Controls which are solely the responsibility of the customer based on the application they are deploying within AWS services. Examples include:
- Service and Communications Protection or Zone Security which may require a customer to route or zone data within specific security environments.
AWS Identity and Access Management (IAM)
AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. IAM gives you the flexibility to configure access based on your operational and security needs. You do this by using a combination of IAM features:
- IAM users, groups, and roles
- IAM policies
- Multi-factor authentication
1. AWS account root user
When you first create an AWS account, you begin with an identity known as the root user. The root user is accessed by signing in with the email address and password that you used to create your AWS account. It has complete access to all the AWS services and resources in the account.
2. IAM Users
An IAM user is an identity that you create in AWS. It represents the person or application that interacts with AWS services and resources. It consists of a name and credentials.
By default, when you create a new IAM user in AWS, it has no permissions associated with it. To allow the IAM user to perform specific actions in AWS, such as launching an Amazon EC2 instance or creating an Amazon S3 bucket, you must grant the IAM user the necessary permissions.
3. IAM Policies
An IAM policy is a document that allows or denies permissions to AWS services and resources. IAM policies enable you to customize users’ levels of access to resources (a minimal policy sketch appears after this list).
4. IAM Groups
An IAM group is a collection of IAM users. When you assign an IAM policy to a group, all users in the group are granted permissions specified by the policy.
5. IAM Roles
An IAM role is an identity that you can assume to gain temporary access to permissions.
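To make the list concrete, a hedged sketch of an IAM policy document that allows read-only access to one S3 bucket, created and attached to a group with boto3 (the account resources, bucket, and group name are placeholders, and the group is assumed to exist):

```python
import json
import boto3

iam = boto3.client("iam")

# Policy document: allow listing and reading one bucket;
# everything not explicitly allowed is implicitly denied.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-notes-bucket",    # placeholder bucket
            "arn:aws:s3:::example-notes-bucket/*",
        ],
    }],
}

policy = iam.create_policy(
    PolicyName="ExampleBucketReadOnly",
    PolicyDocument=json.dumps(policy_document),
)

# Attaching the policy to a group grants it to every user in the group.
iam.attach_group_policy(
    GroupName="analysts",  # hypothetical existing group
    PolicyArn=policy["Policy"]["Arn"],
)
```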
Multi-factor Authentication (MFA)
In IAM, multi-factor authentication (MFA) provides an extra layer of security for your AWS account.
AWS Organizations
Suppose that your company has multiple AWS accounts. You can use AWS Organizations to consolidate and manage multiple AWS accounts within a central location. When you create an organization, AWS Organizations automatically creates a root, which is the parent container for all the accounts in your organization.
In AWS Organizations, you can centrally control permissions for the accounts in your organization by using service control policies (SCPs). SCPs enable you to place restrictions on the AWS services, resources, and individual API actions that users and roles in each account can access.
- Centrally manage access policies across multiple AWS accounts.
- Automate AWS account creation and management.
- Control access to AWS services.
- Consolidate billing across multiple AWS accounts.
- Configure AWS services across multiple accounts.
AWS Federation
Federation is an AWS feature that enables users to access and use AWS resources using their existing corporate credentials.
Organizational Units
In AWS Organizations, you can group accounts into organizational units (OUs) to make it easier to manage accounts with similar business or security requirements. When you apply a policy to an OU, all the accounts in the OU automatically inherit the permissions specified in the policy. By organizing separate accounts into OUs, you can more easily isolate workloads or applications that have specific security requirements.
AWS Artifact
AWS Artifact is a service that provides on-demand access to AWS security and compliance reports and select online agreements. AWS Artifact consists of two main sections: AWS Artifact Agreements and AWS Artifact Reports.
- AWS Artifact Agreements: In AWS Artifact Agreements, you can review, accept, and manage agreements for an individual account and for all your accounts in AWS Organizations. Different types of agreements are offered to address the needs of customers who are subject to specific regulations, such as the Health Insurance Portability and Accountability Act (HIPAA).
- AWS Artifact Reports: AWS Artifact Reports provide compliance reports from third-party auditors. These auditors have tested and verified that AWS is compliant with a variety of global, regional, and industry-specific security standards and regulations. AWS Artifact Reports remains up to date with the latest reports released. You can provide the AWS audit artifacts to your auditors or regulators as evidence of AWS security controls.
Amazon Cognito
Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. With Amazon Cognito, you also have the option to authenticate users through social identity providers such as Facebook, Twitter, or Amazon, with SAML identity solutions, or by using your own identity system.
Customer Compliance Center
The Customer Compliance Center contains resources to help you learn more about AWS compliance. In the Customer Compliance Center, you can read customer compliance stories to discover how companies in regulated industries have solved various compliance, governance, and audit challenges.
Denial-of-service attacks
A denial-of-service (DoS) attack is a deliberate attempt to make a website or application unavailable to users.
Distributed denial-of-service attacks
In a distributed denial-of-service (DDoS) attack, multiple sources are used to start an attack that aims to make a website or application unavailable. This can come from a group of attackers, or even a single attacker. The single attacker can use multiple infected computers (also known as “bots”) to send excessive traffic to a website or application.
AWS Shield
AWS Shield is a service that protects applications against DDoS attacks. AWS Shield provides two levels of protection: Standard and Advanced.
AWS Shield Standard automatically protects all AWS customers at no cost. It protects your AWS resources from the most common, frequently occurring types of DDoS attacks. As network traffic comes into your applications, AWS Shield Standard uses a variety of analysis techniques to detect malicious traffic in real time and automatically mitigates it.
AWS Shield Advanced is a paid service that provides detailed attack diagnostics and the ability to detect and mitigate sophisticated DDoS attacks. It also integrates with other services such as Amazon CloudFront, Amazon Route 53, and Elastic Load Balancing. Additionally, you can integrate AWS Shield with AWS WAF by writing custom rules to mitigate complex DDoS attacks.
AWS Key Management Service (AWS KMS)
AWS KMS helps ensure that your applications’ data is secure while in storage (encryption at rest) and while it is transmitted (encryption in transit).
AWS Key Management Service (AWS KMS) enables you to perform encryption operations through the use of cryptographic keys. A cryptographic key is a random string of digits used for locking (encrypting) and unlocking (decrypting) data. You can use AWS KMS to create, manage, and use cryptographic keys. You can also control the use of keys across a wide range of services and in your applications.
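A boto3 sketch of encrypting and decrypting a small payload with a KMS key (the key ARN is a placeholder; direct Encrypt/Decrypt calls are limited to small payloads of up to 4 KB, with envelope encryption via data keys typically used for larger data):

```python
import boto3

kms = boto3.client("kms")
key_id = "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder

# Encrypt a small secret under the key.
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"secret config value")["CiphertextBlob"]

# Decrypt: KMS identifies the key from metadata embedded in the ciphertext.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
print(plaintext)
```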
AWS CloudHSM
AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. CloudHSM offers you the flexibility to integrate with your applications using industry-standard APIs, such as PKCS#11, Java Cryptography Extensions (JCE), and Microsoft CryptoNG (CNG) libraries.
AWS Web Application Firewall (AWS WAF)
AWS WAF is a web application firewall that lets you monitor network requests that come into your web applications. AWS WAF works together with Amazon CloudFront and an Application Load Balancer. Recall the network access control lists that you learned about in an earlier module. AWS WAF works in a similar way to block or allow traffic. However, it does this by using a web access control list (ACL) to protect your AWS resources.
Amazon Inspector
To perform automated security assessments, we use Amazon Inspector. Amazon Inspector helps to improve the security and compliance of applications by running automated security assessments. It checks applications for security vulnerabilities and deviations from security best practices, such as open access to Amazon EC2 instances and installations of vulnerable software versions.
After Amazon Inspector has performed an assessment, it provides you with a list of security findings. The list is prioritized by severity level and includes a detailed description of each security issue and a recommendation for how to fix it. However, AWS does not guarantee that following the provided recommendations resolves every potential security issue. Under the shared responsibility model, customers are responsible for the security of their applications, processes, and tools that run on AWS services.
Amazon GuardDuty
Amazon GuardDuty is a service that provides intelligent threat detection for your AWS infrastructure and resources. It identifies threats by continuously monitoring the network activity and account behavior within your AWS environment. After you have enabled GuardDuty for your AWS account, GuardDuty begins monitoring your network and account activity. You do not have to deploy or manage any additional security software. GuardDuty then continuously analyzes data from multiple AWS sources, including VPC Flow Logs and DNS logs.
If GuardDuty detects any threats, you can review detailed findings about them from the AWS Management Console. Findings include recommended steps for remediation. You can also configure AWS Lambda functions to take remediation steps automatically in response to GuardDuty’s security findings.
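For example, here is a minimal boto3 sketch that lists current GuardDuty findings, assuming GuardDuty has already been enabled so a detector exists in the account:

```python
import boto3

guardduty = boto3.client("guardduty")

# Assumes GuardDuty has already been enabled, so a detector exists.
detector_id = guardduty.list_detectors()["DetectorIds"][0]

finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
findings = guardduty.get_findings(
    DetectorId=detector_id,
    FindingIds=finding_ids[:10],  # results are paginated; fetch a small batch
)["Findings"]

for finding in findings:
    print(finding["Severity"], finding["Type"], finding["Title"])
```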
Monitoring and Analysis
It's important to set up monitoring in the cloud. Because AWS services are elastic and scale dynamically, you'll want to keep a close watch on your AWS resources to ensure that your systems are running as expected.
Amazon CloudWatch
Amazon CloudWatch is a web service that enables you to monitor and manage various metrics and configure alarm actions based on data from those metrics. CloudWatch uses metrics to represent the data points for your resources. AWS services send metrics to CloudWatch. CloudWatch then uses these metrics to create graphs automatically that show how performance has changed over time.
CloudWatch alarms: With CloudWatch, you can create alarms that automatically perform actions when the value of your metric goes above or below a predefined threshold (a minimal sketch follows below).
CloudWatch dashboard: It enables you to access all the metrics for your resources from a single location.
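Here is a minimal sketch of the alarm feature in boto3. The instance ID and SNS topic ARN are hypothetical placeholders; the alarm fires when average CPU utilization stays above 80% for two consecutive five-minute periods.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the average CPU of one EC2 instance exceeds 80% for two
# consecutive 5-minute periods. Instance ID and SNS topic are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```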
AWS X-Ray
AWS X-Ray helps you identify performance bottlenecks in a web application. X-Ray’s service maps let you see relationships between services and resources in your application in real time. You can easily detect where high latencies are occurring, visualize node and edge latency distributions for services, and then drill down into the specific services and paths impacting application performance.
AWS X-Ray helps developers analyze and debug distributed applications in production or under development, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing so you can identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components. You can use X-Ray to analyze both applications in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.
AWS CloudTrail
AWS CloudTrail records API calls for your account. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, and more. You can think of CloudTrail as a “trail” of breadcrumbs (or a log of actions) that someone has left behind them.
AWS CloudTrail is designed to log all actions taken in your AWS account. This provides a great resource for governance, compliance, and risk auditing.
Events are typically updated in CloudTrail within 15 minutes after an API call. You can filter events by specifying the time and date that an API call occurred, the user who requested the action, the type of resource that was involved in the API call, and more.
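As an example of that filtering, the boto3 sketch below looks up the last week of events for a specific API call; the event name used is just an illustration.

```python
import boto3
from datetime import datetime, timedelta

cloudtrail = boto3.client("cloudtrail")

# Look up who terminated EC2 instances in the last 7 days (example filter).
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "TerminateInstances"}
    ],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
    MaxResults=50,
)

for event in response["Events"]:
    print(event["EventTime"], event.get("Username", "-"), event["EventName"])
```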
CloudTrail Insights
Within CloudTrail, you can also enable CloudTrail Insights. This optional feature allows CloudTrail to automatically detect unusual API activities in your AWS account.
AWS Trusted Advisor
AWS Trusted Advisor is a web service that inspects your AWS environment and provides real-time recommendations in accordance with AWS best practices. Trusted Advisor compares its findings to AWS best practices in five categories: cost optimization, performance, security, fault tolerance, and service limits.
For the checks in each category, Trusted Advisor offers a list of recommended actions and additional resources to learn more about AWS best practices.
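Programmatic access to Trusted Advisor goes through the AWS Support API, which is only available with a Business or Enterprise Support plan. A hedged sketch of listing check statuses:

```python
import boto3

# The Support API requires a Business or Enterprise Support plan,
# and its endpoint lives in us-east-1.
support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]
for check in checks:
    result = support.describe_trusted_advisor_check_result(checkId=check["id"])["result"]
    print(f'{check["category"]:25} {result["status"]:10} {check["name"]}')
```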
Pricing and Support
AWS Free Tier
The AWS Free Tier enables you to begin using certain services at no cost, up to specified usage limits, for a specified period.
Three types of offers are available:
- Always Free: These offers do not expire and are available to all AWS customers. For example, AWS Lambda allows 1 million free requests and up to 3.2 million seconds of compute time per month. Amazon DynamoDB allows 25 GB of free storage per month.
- 12 Months Free: These offers are free for 12 months following your initial sign-up date to AWS. Examples include specific amounts of Amazon S3 Standard Storage, thresholds for monthly hours of Amazon EC2 compute time, and amounts of Amazon CloudFront data transfer out.
- Trials: Short-term free trial offers start from the date you activate a particular service. The length of each trial might vary by number of days or the amount of usage in the service. For example, Amazon Inspector offers a 90-day free trial. Amazon Lightsail (a service that enables you to run virtual private servers) offers 750 free hours of usage over a 30-day period.
AWS Pricing
AWS offers a range of cloud computing services with pay-as-you-go pricing.
- Pay for what you use: For each service, you pay for exactly the amount of resources that you actually use, without requiring long-term contracts or complex licensing.
- Pay less when you reserve: Some services offer reservation options that provide a significant discount compared to On-Demand Instance pricing. For example, suppose that your company is using Amazon EC2 instances for a workload that needs to run continuously. You might choose to run this workload on Amazon EC2 Instance Savings Plans, because the plan allows you to save up to 72% over the equivalent On-Demand Instance capacity.
- Pay less with volume-based discounts when you use more: Some services offer tiered pricing, so the per-unit cost is incrementally lower with increased usage. For example, the more Amazon S3 storage space you use, the less you pay for it per GB.
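To see how tiered pricing plays out, here is a small worked example in Python. The per-GB rates are illustrative only (loosely modeled on published S3 Standard tiers) and are not current AWS prices.

```python
# Illustrative tiered storage rates (USD per GB-month) - NOT current AWS prices.
TIERS = [
    (50 * 1024, 0.023),     # first 50 TB
    (450 * 1024, 0.022),    # next 450 TB
    (float("inf"), 0.021),  # everything above 500 TB
]

def monthly_storage_cost(gb: float) -> float:
    """Walk down the tiers, charging each slice of usage at its tier rate."""
    cost, remaining = 0.0, gb
    for tier_size, rate in TIERS:
        used = min(remaining, tier_size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

usage_gb = 600 * 1024  # 600 TB
cost = monthly_storage_cost(usage_gb)
print(f"${cost:,.2f}  (blended rate ${cost / usage_gb:.4f}/GB)")
```

The blended per-GB rate for 600 TB comes out below the first-tier price because the slices above 50 TB are billed at the cheaper tier rates.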
AWS Cost Governance Best Practices:
- Resource controls: (policy-based and automated) govern who can deploy resources and the process for identifying, monitoring, and categorizing these new resources. These controls can use tools such as AWS Service Catalog, AWS Identity and Access Management (IAM) roles and permissions, and AWS Organizations, as well as third-party tools such as ServiceNow.
- Cost allocation applies to teams using resources, shifting the emphasis from the IT-as-cost-center mentality to one of shared responsibility.
- Budgeting processes include reviewing budgets and realized costs, and then acting on them.
- Architecture optimization focuses on the need to continually refine workloads to be more cost-conscious to create better architected systems.
- Tagging and tagging enforcement ensure cost tracking and visibility across organization lines.
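For tag-based cost tracking, the Resource Groups Tagging API can inventory which resources carry a given tag. A minimal sketch, assuming a hypothetical `CostCenter` tag key:

```python
import boto3

tagging = boto3.client("resourcegroupstaggingapi")

# List every taggable resource in the region that carries a CostCenter tag
# (the tag key is a hypothetical example).
paginator = tagging.get_paginator("get_resources")
for page in paginator.paginate(TagFilters=[{"Key": "CostCenter"}]):
    for resource in page["ResourceTagMappingList"]:
        tags = {t["Key"]: t["Value"] for t in resource["Tags"]}
        print(resource["ResourceARN"], tags.get("CostCenter"))
```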
AWS Pricing Calculator
The AWS Pricing Calculator lets you explore AWS services and create an estimate for the cost of your use cases on AWS. You can organize your AWS estimates by groups that you define. A group can reflect how your company is organized, such as providing estimates by cost center.
AWS Billing & Cost Management dashboard
Use the AWS Billing & Cost Management dashboard to pay your AWS bill, monitor your usage, and analyze and control your costs.
Consolidated Billing
In an earlier module, you learned about AWS Organizations, a service that enables you to manage multiple AWS accounts from a central location. AWS Organizations also provides the option for consolidated billing.
The consolidated billing feature of AWS Organizations enables you to receive a single bill for all AWS accounts in your organization. By consolidating, you can easily track the combined costs of all the linked accounts in your organization. The default maximum number of accounts allowed for an organization is 4, but you can contact AWS Support to increase your quota, if needed.
Another benefit of consolidated billing is the ability to share bulk discount pricing, Savings Plans, and Reserved Instances across the accounts in your organization. For instance, one account might not have enough monthly usage to qualify for discount pricing. However, when multiple accounts are combined, their aggregated usage may result in a benefit that applies across all accounts in the organization.
AWS Budgets
In AWS Budgets, you can create budgets to plan your service usage, service costs, and instance reservations. The information in AWS Budgets updates three times a day. This helps you to accurately determine how close your usage is to your budgeted amounts or to the AWS Free Tier limits.
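A hedged boto3 sketch that creates a $100/month cost budget and emails a subscriber once actual spend crosses 80% of it; the account ID and email address are placeholders.

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",  # placeholder account ID
    Budget={
        "BudgetName": "monthly-cost-cap",
        "BudgetLimit": {"Amount": "100", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,  # percent of the budgeted amount
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "ops@example.com"}
            ],
        }
    ],
)
```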
AWS Cost Explorer
AWS Cost Explorer is a tool that enables you to visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report of the costs and usage for your top five cost-accruing AWS services. You can apply custom filters and groups to analyze your data.
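Cost Explorer data is also reachable through its API. Here is a minimal sketch that breaks one month’s unblended cost down by service; the dates are arbitrary examples.

```python
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # example month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{group['Keys'][0]:40} ${amount:,.2f}")
```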
AWS Support
AWS offers four different Support plans to help you troubleshoot issues, lower costs, and efficiently use AWS services. You can choose from the following Support plans to meet your company’s needs:
- Basic Support
Basic Support is free for all AWS customers. It includes access to whitepapers, documentation, and support communities. With Basic Support, you can also contact AWS for billing questions and service limit increases.
The Developer, Business, and Enterprise Support plans include all the benefits of Basic Support, in addition to the ability to open an unrestricted number of technical support cases. These three Support plans have pay-by-the-month pricing and require no long-term contracts.
- Developer Support
The Developer Support plan can help you identify opportunities for combining specific services and features when you’re unsure how to use them together to build applications that address your company’s needs.
Customers in the Developer Support plan have access to features such as:
- Best practice guidance
- Client-side diagnostic tools
- Building-block architecture support, which consists of guidance for how to use AWS offerings, features, and services together
- Business Support
With the Business Support plan, you can contact AWS Support for assistance with installing, configuring, and troubleshooting common third-party operating systems on your Amazon EC2 instances.
Customers with a Business Support plan have access to additional features, including:
- Use-case guidance to identify AWS offerings, features, and services that can best support your specific needs
- All AWS Trusted Advisor checks
- Limited support for third-party software, such as common operating systems and application stack components
- Enterprise Support
In addition to all the features included in the Basic, Developer, and Business Support plans, customers with an Enterprise Support plan have access to features such as:
- Application architecture guidance, which is a consultative relationship to support your company’s specific use cases and applications
- Infrastructure event management: A short-term engagement with AWS Support that helps your company gain a better understanding of your use cases. This also provides your company with architectural and scaling guidance.
- A Technical Account Manager
Technical Account Manager (TAM)
The Enterprise Support plan includes access to a Technical Account Manager (TAM). The TAM is your primary point of contact at AWS. They provide guidance, architectural reviews, and ongoing communication with your company as you plan, deploy, and optimize your applications.
AWS Marketplace
AWS Marketplace is a digital catalog that includes thousands of software listings from independent software vendors. You can use AWS Marketplace to find, test, and buy software that runs on AWS. For each listing in AWS Marketplace, you can access detailed information on pricing options, available support, and reviews from other AWS customers. You can also explore software solutions by industry and use case.
AWS Service Catalog
AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage commonly deployed IT services, and helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.
Migration and Innovation
Guidance on migrating from on-premises deployments to AWS.
AWS Cloud Adoption Framework (AWS CAF)
At the highest level, the AWS Cloud Adoption Framework (AWS CAF) organizes guidance into six areas of focus, called Perspectives. Each Perspective addresses distinct responsibilities. The planning process helps the right people across the organization prepare for the changes ahead.
In general, the Business, People, and Governance Perspectives focus on business capabilities, whereas the Platform, Security, and Operations Perspectives focus on technical capabilities.
- Business Perspective
The Business Perspective ensures that IT aligns with business needs and that IT investments link to key business results. Create a strong business case for cloud adoption and prioritize cloud adoption initiatives. Ensure that your business strategies and goals align with your IT strategies and goals.
Common roles in the Business Perspective include:
- Business managers
- Finance managers
- Budget owners
- Strategy stakeholders
- People Perspective
The People Perspective supports development of an organization-wide change management strategy for successful cloud adoption. Evaluate organizational structures and roles, as well as new skill and process requirements, and identify gaps. This helps prioritize training, staffing, and organizational changes.
Common roles in the People Perspective include:
- Human resources
- Staffing
- People managers
- Governance Perspective
The Governance Perspective focuses on the skills and processes to align IT strategy with business strategy. This ensures that you maximize the business value and minimize risks. Understand how to update the staff skills and processes necessary to ensure business governance in the cloud. Manage and measure cloud investments to evaluate business outcomes.
Common roles in the Governance Perspective include:
- Chief Information Officer (CIO)
- Program managers
- Enterprise architects
- Business analysts
- Portfolio managers
- Platform Perspective
The Platform Perspective includes principles and patterns for implementing new solutions on the cloud, and migrating on-premises workloads to the cloud. Use a variety of architectural models to understand and communicate the structure of IT systems and their relationships. Describe the architecture of the target state environment in detail.
Common roles in the Platform Perspective include:
- Chief Technology Officer (CTO)
- IT managers
- Solutions architects
- Security Perspective
The Security Perspective ensures that the organization meets security objectives for visibility, auditability, control, and agility. Use the AWS CAF to structure the selection and implementation of security controls that meet the organization’s needs.
Common roles in the Security Perspective include:
- Chief Information Security Officer (CISO)
- IT security managers
- IT security analysts
- Operations Perspective
The Operations Perspective helps you to enable, run, use, operate, and recover IT workloads to the level agreed upon with your business stakeholders. Define how day-to-day, quarter-to-quarter, and year-to-year business is conducted. Align with and support the operations of the business. The AWS CAF helps these stakeholders define current operating procedures and identify the process changes and training needed to implement successful cloud adoption.
Common roles in the Operations Perspective include:
- IT operations managers
- IT support managers
6 Strategies for Migration - the 6 R's
- Rehosting: Rehosting, also known as “lift-and-shift,” involves moving applications to the cloud without changes.
- Replatforming: Replatforming, also known as “lift, tinker, and shift,” involves making a few cloud optimizations to realize a tangible benefit. Optimization is achieved without changing the core architecture of the application.
- Refactoring/re-architecting: involves reimagining how an application is architected and developed by using cloud-native features. Refactoring is driven by a strong business need to add features, scale, or performance that would otherwise be difficult to achieve in the application’s existing environment.
- Repurchasing: involves moving from a traditional license to a software-as-a-service model.
- Retaining: consists of keeping applications that are critical for the business in the source environment. This might include applications that require major refactoring before they can be migrated, or work that can be postponed until a later time.
- Retiring: the process of removing applications that are no longer needed.
AWS Snow Family
The AWS Snow Family is a collection of physical devices that help to physically transport up to exabytes of data into and out of AWS.
AWS Snow Family is composed of AWS Snowcone, AWS Snowball, and AWS Snowmobile.
These devices offer different capacity points, and most include built-in computing capabilities. AWS owns and manages the Snow Family devices and integrates with AWS security, monitoring, storage management, and computing capabilities.
AWS Snowcone
AWS Snowcone is a small, rugged, and secure edge computing and data transfer device. It features 2 CPUs, 4 GB of memory, and 8 TB of usable storage.
AWS Snowball
AWS Snowball offers two types of devices:
- Snowball Edge Storage Optimized devices are well suited for large-scale data migrations and recurring transfer workflows, in addition to local computing with higher capacity needs.
- Storage: 80 TB of hard disk drive (HDD) capacity for block volumes and Amazon S3 compatible object storage, and 1 TB of SATA solid state drive (SSD) for block volumes.
- Compute: 40 vCPUs and 80 GiB of memory to support Amazon EC2 sbe1 instances (equivalent to C5).
- Snowball Edge Compute Optimized provides powerful computing resources for use cases such as machine learning, full motion video analysis, analytics, and local computing stacks.
- Storage: 42-TB usable HDD capacity for Amazon S3 compatible object storage or Amazon EBS compatible block volumes and 7.68 TB of usable NVMe SSD capacity for Amazon EBS compatible block volumes.
- Compute: 52 vCPUs, 208 GiB of memory, and an optional NVIDIA Tesla V100 GPU. Devices run Amazon EC2 sbe-c and sbe-g instances, which are equivalent to C5, M5a, G3, and P3 instances.
AWS Snowmobile
AWS Snowmobile is an exabyte-scale data transfer service used to move large amounts of data to AWS. You can transfer up to 100 petabytes of data per Snowmobile, a 45-foot-long ruggedized shipping container pulled by a semitrailer truck.
The AWS Well-Architected Framework
The AWS Well-Architected Framework helps you understand how to design and operate reliable, secure, efficient, and cost-effective systems in the AWS Cloud. It provides a way for you to consistently measure your architecture against best practices and design principles and identify areas for improvement.
The Well-Architected Framework is based on five pillars:
- Operational excellence: the ability to run and monitor systems to deliver business value, gain insight into their operations, and continuously improve supporting processes and procedures.
- Security: ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.
- Reliability: ability of a system to do the following:
- Recover from infrastructure or service disruptions
- Dynamically acquire computing resources to meet demand
- Mitigate disruptions such as misconfigurations or transient network issues
- Performance efficiency: the ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
- Cost optimization: ability to run systems to deliver business value at the lowest price point.
Benefits of the AWS Cloud
Operating in the AWS Cloud offers many benefits over computing in on-premises or hybrid environments. Six advantages are:
- Trade upfront expense for variable expense
Upfront expenses include data centers, physical servers, and other resources that you would need to invest in before using computing resources. Instead of investing heavily in data centers and servers before you know how you’re going to use them, you can pay only when you consume computing resources.
- Benefit from massive economies of scale
By using cloud computing, you can achieve a lower variable cost than you can get on your own.
Because usage from hundreds of thousands of customers aggregates in the cloud, providers such as AWS can achieve higher economies of scale. Economies of scale translate into lower pay-as-you-go prices.
- Stop guessing capacity
With cloud computing, you don’t have to predict how much infrastructure capacity you will need before deploying an application.
- Increase speed and agility
The flexibility of cloud computing makes it easier for you to develop and deploy applications.
- Stop spending money running and maintaining data centers
Running your own data centers often requires you to spend significant money and time managing infrastructure and servers. A benefit of cloud computing is the ability to focus less on these tasks and more on your applications and customers.
- Go global in minutes
The AWS Cloud global footprint enables you to quickly deploy applications to customers around the world, while providing them with low latency.
Useful Links
- AWS Certified Cloud Practitioner Notes | AWS Certification Training - Digital Cloud Training
- SDKs and Programming Toolkits for AWS
- AWS Certified Cloud Practitioner Training 2020 - Full Course
- AWS Well-Architected Framework
- AWS Well-Architected Framework [PDF]
- AWS Tutorial: A Step-by-Step Tutorial for Beginners [2022 Edition]