AWS security specialty - vedratna/aws-learning GitHub Wiki
- You can view the AWS managed CMKs in your account, view their key policies, and audit their use in AWS CloudTrail logs. However, you cannot manage these CMKs, rotate them, or change their key policies, and you cannot use them in cryptographic operations directly; the service that creates them uses them on your behalf. Note that the term "AWS KMS managed key" is broader than "AWS managed CMK": KMS managed keys come in two types, (1) AWS managed CMKs and (2) customer managed CMKs, and only for the second type can the customer control all of the activities mentioned above.
- When you switch roles in the AWS Management Console, the console always uses your original credentials to authorize the switch. This applies whether you sign in as an IAM user, as a SAML-federated role, or as a web-identity federated role. For example, if you switch to RoleA, IAM uses your original user or federated role credentials to determine whether you are allowed to assume RoleA. If you then switch to RoleB while you are using RoleA, AWS still uses your original user or federated role credentials to authorize the switch, not the credentials for RoleA.
- Imported key material is supported only for symmetric CMKs in AWS KMS key stores. It is not supported on asymmetric CMKs or CMKs in custom key stores.
- AWS customers are welcome to carry out security assessments or penetration tests against their AWS infrastructure without prior approval for 8 services, listed in the next section under “Permitted Services.”
Permitted Services:
Amazon EC2 instances, NAT Gateways, and Elastic Load Balancers
Amazon RDS
Amazon CloudFront
Amazon Aurora
Amazon API Gateway
AWS Lambda and Lambda@Edge functions
Amazon Lightsail resources
Amazon Elastic Beanstalk environments
Prohibited Activities:
DNS zone walking via Amazon Route 53 Hosted Zones
Denial of Service (DoS), Distributed Denial of Service (DDoS), Simulated DoS, Simulated DDoS (These are subject to the DDoS Simulation Testing policy)
Port flooding
Protocol flooding
Request flooding (login request flooding, API request flooding)
- Penetration testing on DynamoDB is not allowed.
- CMKs can be broken down into two general types: AWS-managed and customer-managed. An AWS-managed CMK is created when you choose to enable server-side encryption of an AWS resource under the AWS-managed CMK for that service for the first time (e.g., SSE-KMS). The AWS-managed CMK is unique to your AWS account and the Region in which it’s used. An AWS-managed CMK can only be used to protect resources within the specific AWS service for which it’s created. It does not provide the level of granular control that a customer-managed CMK provides. For more control, a best practice is to use a customer-managed CMK in all supported AWS services and in your applications. A customer-managed CMK is created at your request and should be configured based upon your explicit use case.
- For a Redshift cluster, you can create a local database user with the required access (read-only, read/write, etc.) and then attach an IAM policy to the application that references the ARN of that local database user. When the application makes a GetClusterCredentials call to Redshift for a connection, it receives temporary credentials carrying the same access as that database user.
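The IAM policy attached to the application might look like the following sketch; the cluster name, database name, and user name here are illustrative, not from the source.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "redshift:GetClusterCredentials",
            "Resource": [
                "arn:aws:redshift:us-east-1:111122223333:dbuser:examplecluster/readonly_user",
                "arn:aws:redshift:us-east-1:111122223333:dbname:examplecluster/exampledb"
            ]
        }
    ]
}
```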
- Why would I need to use a custom key store? A: Since you control your AWS CloudHSM cluster, you have the option to manage the lifecycle of your CMKs independently of AWS KMS. There are four reasons why you might find a custom key store useful.
- you might have keys that are explicitly required to be protected in a single tenant HSM or in an HSM over which you have direct control.
- you might have keys that are required to be stored in an HSM that has been validated to FIPS 140-2 level 3 overall (the HSMs used in the standard AWS KMS key store are either validated or in the process of being validated to level 2 with level 3 in multiple categories).
- you might need the ability to immediately remove key material from AWS KMS and to prove you have done so by independent means.
- you might have a requirement to be able to audit all use of your keys independently of AWS KMS or AWS CloudTrail.
- I want my S3 bucket to store only objects encrypted by my KMS key. How can I do that? Ans: Use the bucket policy below to deny PutObject requests that do not use SSE-KMS with the specified key.
{
    "Version": "2012-10-17",
    "Id": "PutObjPolicy",
    "Statement": [
        {
            "Sid": "DenySSE-S3",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::samplebucketname/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            }
        },
        {
            "Sid": "RequireKMSEncryption",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::samplebucketname/*",
            "Condition": {
                "StringNotLikeIfExists": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-1:111122223333:key/*"
                }
            }
        }
    ]
}
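Conceptually, the two Deny statements evaluate against the request's encryption headers. The following is a minimal sketch of that evaluation (not AWS's actual policy engine); the header names follow the condition keys in the policy.

```python
from fnmatch import fnmatchcase

ALLOWED_KEY_PATTERN = "arn:aws:kms:us-east-1:111122223333:key/*"

def is_denied(headers):
    """Simulate the two Deny statements: block SSE-S3 (AES256) uploads,
    and block uploads whose KMS key ARN doesn't match the allowed pattern."""
    sse = headers.get("x-amz-server-side-encryption")
    key_id = headers.get("x-amz-server-side-encryption-aws-kms-key-id")
    # Statement 1: StringEquals on "AES256" -> deny SSE-S3 uploads
    if sse == "AES256":
        return True
    # Statement 2: StringNotLikeIfExists -> deny only if the header is
    # present and does not match the allowed key ARN pattern
    if key_id is not None and not fnmatchcase(key_id, ALLOWED_KEY_PATTERN):
        return True
    return False
```

Note that with only these two statements, a request with no encryption headers at all is not denied; it would fall through to the bucket's default encryption settings.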
- Launch constraints allow an AWS Service Catalog end user to launch an AWS Service Catalog product without requiring elevated permissions to AWS resources. In other words, the end user can launch the resource through Service Catalog even though they have no permission to create it directly.
- When you implement data key caching, you need to configure the security thresholds that the caching CMM enforces. The security thresholds help you limit how long each cached data key is used and how much data is protected under each data key. The caching CMM returns cached data keys only when the cache entry conforms to all of the security thresholds. If the cache entry exceeds any threshold, the entry is not used for the current operation and it is evicted from the cache as soon as possible. The first use of each data key (before caching) is exempt from these thresholds.
- Maximum age (required): Determines how long a cached entry can be used, beginning when it was added. This value is required. Enter a value greater than 0. The AWS Encryption SDK does not limit the maximum age value. Use the shortest interval that still allows your application to benefit from the cache. You can use the maximum age threshold like a key rotation policy. Use it to limit reuse of data keys, minimize exposure of cryptographic materials, and evict data keys whose policies might have changed while they were cached.
- Maximum messages encrypted (optional): Specifies the maximum number of messages that a cached data key can encrypt. This value is optional. Enter a value between 1 and 2^32 messages. The default value is 2^32 messages. Set the number of messages protected by each cached key to be large enough to get value from reuse, but small enough to limit the number of messages that might be exposed if a key is compromised.
- Maximum bytes encrypted (optional): Specifies the maximum number of bytes that a cached data key can encrypt. This value is optional. Enter a value between 0 and 2^63 - 1. The default value is 2^63 - 1. A value of 0 lets you use data key caching only when you are encrypting empty message strings. The bytes in the current request are included when evaluating this threshold. If the bytes processed, plus current bytes, exceed the threshold, the cached data key is evicted from the cache, even though it might have been used on a smaller request.
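The three thresholds above can be sketched as a small cache-entry check. This is a simplification for illustration, not the Encryption SDK's actual caching CMM API.

```python
import time

class CachedDataKey:
    """Toy model of a cached data key guarded by the three caching
    CMM thresholds: max age, max messages, and max bytes."""

    def __init__(self, key_bytes, max_age_s, max_messages=2**32, max_bytes=2**63 - 1):
        self.key_bytes = key_bytes
        self.created = time.time()
        self.max_age_s = max_age_s        # required, must be > 0
        self.max_messages = max_messages  # optional, default 2^32
        self.max_bytes = max_bytes        # optional, default 2^63 - 1
        self.messages = 0
        self.bytes_used = 0

    def try_use(self, message_bytes, now=None):
        """Return True if this cached key may encrypt one more message of
        `message_bytes` bytes; False means the entry must be evicted."""
        now = time.time() if now is None else now
        if now - self.created > self.max_age_s:
            return False
        if self.messages + 1 > self.max_messages:
            return False
        # bytes in the current request count toward the threshold
        if self.bytes_used + message_bytes > self.max_bytes:
            return False
        self.messages += 1
        self.bytes_used += message_bytes
        return True
```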
- You can use AWS Firewall Manager security group policies to manage Amazon Virtual Private Cloud security groups for your organization in AWS Organizations. You can apply centrally controlled security group policies to your entire organization or to a select subset of your accounts and resources. You can also monitor and manage the security group policies that are in use in your organization, with auditing and usage security group policies. Firewall Manager continuously maintains your policies and applies them to accounts and resources as they are added or updated across your organization. You can use Firewall Manager security group policies to do the following across your AWS organization:
- Apply common security groups to specified accounts and resources.
- Audit security group rules, to locate and remediate noncompliant rules.
- Audit usage of security groups, to clean up unused and redundant security groups.
- AWS Config resources required for CIS controls: To run security checks for the enabled controls on your environment's resources, Security Hub either runs through the exact audit steps prescribed for the checks in Securing Amazon Web Services or uses specific AWS Config managed rules. Inspector is for CIS OS benchmarking (with the Inspector agent on EC2); AWS Config is for the CIS Foundations Benchmark.
- You can control access so that instances can run only parameters that you specify. If you choose the SecureString parameter type when you create your parameter, Systems Manager uses AWS KMS to encrypt the parameter value. AWS KMS encrypts the value by using either an AWS managed key or a customer managed key. The following example policy allows instances to get a parameter value only for parameters that begin with prod-. If the parameter is a SecureString parameter, then the instance decrypts the string using AWS KMS.
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "ssm:GetParameters"
         ],
         "Resource":[
            "arn:aws:ssm:region:account-id:parameter/prod-*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "kms:Decrypt"
         ],
         "Resource":[
            "arn:aws:kms:region:account-id:key/KMSkey"
         ]
      }
   ]
}
- If your customer managed CMK is in a different account than the Auto Scaling group, you must use a grant in combination with the key policy to allow access to the CMK. For more information, see Using grants in the AWS Key Management Service Developer Guide.
- First, add the following two policy statements to the CMK's key policy, replacing the example ARN with the ARN of the external account, and specifying the account in which the key can be used. This allows you to use IAM policies to give an IAM user or role in the specified account permission to create a grant for the CMK using the CLI command that follows. Giving the account full access to the CMK does not by itself give any IAM users or roles access to the CMK.
{
   "Sid": "Allow external account 111122223333 use of the CMK",
   "Effect": "Allow",
   "Principal": {
       "AWS": [
           "arn:aws:iam::111122223333:root"
       ]
   },
   "Action": [
       "kms:Encrypt",
       "kms:Decrypt",
       "kms:ReEncrypt*",
       "kms:GenerateDataKey*",
       "kms:DescribeKey"
   ],
   "Resource": "*"
}
{
   "Sid": "Allow attachment of persistent resources in external account 111122223333",
   "Effect": "Allow",
   "Principal": {
       "AWS": [
           "arn:aws:iam::111122223333:root"
       ]
   },
   "Action": [
       "kms:CreateGrant"
   ],
   "Resource": "*"
}
- Then, from the external account, create a grant that delegates the relevant permissions to the appropriate service-linked role. The Grantee Principal element of the grant is the ARN of the appropriate service-linked role. The key-id is the ARN of the CMK. The following is an example create-grant CLI command that gives the service-linked role named AWSServiceRoleForAutoScaling in account 111122223333 permissions to use the CMK in account 444455556666.
aws kms create-grant \
  --region us-west-2 \
  --key-id arn:aws:kms:us-west-2:444455556666:key/1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d \
  --grantee-principal arn:aws:iam::111122223333:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling \
  --operations "Encrypt" "Decrypt" "ReEncryptFrom" "ReEncryptTo" "GenerateDataKey" "GenerateDataKeyWithoutPlaintext" "DescribeKey" "CreateGrant"
- For this command to succeed, the user making the request must have permissions for the CreateGrant action. The following example IAM policy allows an IAM user or role in account 111122223333 to create a grant for the CMK in account 444455556666.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCreationOfGrantForTheCMKinExternalAccount444455556666",
      "Effect": "Allow",
      "Action": "kms:CreateGrant",
      "Resource": "arn:aws:kms:us-west-2:444455556666:key/1a2b3c4d-5e6f-1a2b-3c4d-5e6f1a2b3c4d"
    }
  ]
}
- You can have CloudTrail deliver log files from multiple AWS accounts into a single Amazon S3 bucket. For example, you have four AWS accounts with account IDs 111111111111, 222222222222, 333333333333, and 444444444444, and you want to configure CloudTrail to deliver log files from all four of these accounts to a bucket belonging to account 111111111111. To accomplish this, complete the following steps in order:
- Turn on CloudTrail in the account where the destination bucket will belong (111111111111 in this example). Do not turn on CloudTrail in any other accounts yet.
- Update the bucket policy on your destination bucket to grant cross-account permissions to CloudTrail.
- Turn on CloudTrail in the other accounts you want (222222222222, 333333333333, and 444444444444 in this example). Configure CloudTrail in these accounts to use the same bucket belonging to the account that you specified in step 1 (111111111111 in this example).
- If you try to add, modify, or remove a log file prefix for an S3 bucket that receives logs from a trail, you may see the error: There is a problem with the bucket policy. A bucket policy with an incorrect prefix can prevent your trail from delivering logs to the bucket. To resolve this issue, use the Amazon S3 console to update the prefix in the bucket policy, and then use the CloudTrail console to specify the same prefix for the bucket in the trail.
- Perfect Forward Secrecy on load balancers: this security feature uses a derived session key to provide additional safeguards against eavesdropping on encrypted data. It prevents the decoding of captured data even if the secret long-term key is compromised. To use Perfect Forward Secrecy, configure your load balancer with the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) cipher suites; most major browsers now support these newer and more secure cipher suites. The Server Order Preference option additionally lets your load balancer prefer these stronger cipher suites during negotiation.
- The AWS Management Console doesn't support role chaining. You can use the switch role feature in the console to get a role's temporary credentials; the console uses the credentials of the original IAM or federated user to switch to another role. For more information, see Switching to a role (console).
- You can configure an Application Load Balancer to securely authenticate users as they access your applications. This enables you to offload the work of authenticating users to your load balancer so that your applications can focus on their business logic. The following use cases are supported:
- Authenticate users through an identity provider (IdP) that is OpenID Connect (OIDC) compliant.
- Authenticate users through social IdPs, such as Amazon, Facebook, or Google, through the user pools supported by Amazon Cognito.
- Authenticate users through corporate identities, using SAML, LDAP, or Microsoft AD, through the user pools supported by Amazon Cognito.
- Support for groups in Amazon Cognito user pools enables you to create and manage groups, add users to groups, and remove users from groups. You can use groups to create a collection of users in a user pool, which is often done to set the permissions for those users. For example, you can create separate groups for users who are readers, contributors, and editors of your website and app. Using the IAM role associated with a group, you can also set different permissions for those different groups so that only contributors can put content into Amazon S3 and only editors can publish content through an API in Amazon API Gateway.
- Logins, connections, and any activity performed through AWS Systems Manager Session Manager are logged and traceable.
- The EC2Rescue for Windows Server command line interface (CLI) allows you to run an EC2Rescue for Windows Server plugin (referred to as an "action") programmatically. The EC2Rescue for Windows Server tool has two execution modes:
- online—This allows you to take action on the instance that EC2Rescue for Windows Server is installed on, such as collect log files.
- offline:<device_id>—This allows you to take action on the offline root volume that is attached to a separate Amazon EC2 Windows instance, on which you have installed EC2Rescue for Windows Server.
- Using EC2Rescue, you can collect all logs, an entire log group, or an individual log within a group. It is also useful for capturing memory dumps of compromised instances for further analysis.
- When you use imported key material, you remain responsible for the key material while allowing AWS KMS to use a copy of it. You might choose to do this for one or more of the following reasons:
- To set an expiration time for the key material in AWS and to manually delete it, but to also make it available again in the future. In contrast, scheduling key deletion requires a waiting period of 7 to 30 days, after which you cannot recover the deleted CMK.
- This would be the only way to "delete" a key earlier than 7 days.
- Only customer managed CMKs let you control key rotation, such as rotating the key annually.
- Enabling server-side encryption encrypts the log files but not the digest files with SSE-KMS. Digest files are encrypted with Amazon S3-managed encryption keys (SSE-S3).
- Can other AWS services use CloudHSM to store and manage keys? Ans: AWS services integrate with AWS Key Management Service, which in turn is integrated with AWS CloudHSM through the KMS custom key store feature. If you want to use the server-side encryption offered by many AWS services (such as EBS, S3, or Amazon RDS), you can do so by configuring a custom key store in AWS KMS.
- If you have accidentally deleted the imported key material in CMK, then you can download the new wrapping key and import token and import the original key into the existing CMK.
- To connect to services such as EC2 using just Direct Connect you need to create a private virtual interface. However, if you want to encrypt the traffic flowing through Direct Connect, you will need to use the public virtual interface of DX to create a VPN connection that will allow access to AWS services such as S3, EC2, and other services.
- To be certain that all root user activity is authorized and expected, it is important to monitor root API calls to a given AWS account and to send a notification when this type of activity is detected. This is how it can be done:
- An Amazon CloudWatch Events rule detects any AWS account root user API events.
- It triggers an AWS Lambda function.
- The Lambda function then processes the root API event. It also publishes a message to an Amazon SNS topic, where the subject contains the AWS account ID or AWS account alias where the root API call was detected and the type of API activity. The SNS topic then sends notifications to its email subscribers about this event.
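The CloudWatch Events rule in the first step matches on the calling identity's type. A sketch of such an event pattern (the exact pattern in AWS's sample solution may differ):

```json
{
  "detail-type": ["AWS API Call via CloudTrail", "AWS Console Sign In via CloudTrail"],
  "detail": {
    "userIdentity": {
      "type": ["Root"]
    }
  }
}
```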
- You can enable logging to get detailed information about traffic that is analyzed by your web ACL. Information that is contained in the logs includes the time that AWS WAF received the request from your AWS resource, detailed information about the request, and the action for the rule that each request matched.
- In the logging configuration for your web ACL, you can customize what AWS WAF sends to the logs as follows:
- Log filtering – You can add filtering to specify which web requests are kept in the logs and which are dropped. You can filter on the rule action and on the web request labels that were applied during the request evaluation.
- Field redaction – You can redact some fields from the log records. Redacted fields appear as XXX in the logs. For example, if you redact the URI field, the URI field in the logs will be XXX. For a list of the log fields, see Log Fields.
- You send logs from your web ACL to an Amazon Kinesis Data Firehose with a configured storage destination. After you enable logging, AWS WAF delivers logs to your storage destination through the HTTPS endpoint of Kinesis Data Firehose.
- In KMS, automatic rotation of AWS managed CMKs happens every three years; for customer managed CMKs with rotation enabled, it happens every 365 days.
- Every CMK must have a key policy. You can also use IAM policies to control access to a CMK, but only if the key policy allows it. If the key policy doesn't allow it, IAM policies that attempt to control access to a CMK are ineffective. To allow IAM policies to control access to a CMK, the key policy must include a policy statement that gives the AWS account full access to the CMK, like the following one. The following example shows the policy statement that gives an example AWS account full access to a CMK. This policy statement lets the account use IAM policies, along with key policies, to control access to the CMK.
{
  "Sid": "Enable IAM policies",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:root"
   },
  "Action": "kms:*",
  "Resource": "*"
}
- Policies can be identity-based (IAM), resource-based, Organizations SCPs, or permissions boundaries. Note that SCPs and permissions boundaries don't grant anything by themselves; they define the boundary of permissions, beyond which the user is not authorized even if an IAM or resource-based policy allows the action. The general rule is that an explicit DENY always takes precedence over an explicit ALLOW.
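The combination of these policy types can be sketched as a simplified evaluation function. This is an illustration of the precedence rules only, not IAM's full evaluation logic (which also covers resource policies, sessions, and cross-account cases).

```python
def is_allowed(identity_allow, explicit_deny, scp_allows, boundary_allows):
    """Simplified IAM evaluation for a principal in an account:
    an explicit deny anywhere wins; otherwise the request must be
    allowed by an identity policy AND fall inside the SCP and the
    permissions boundary, which bound but never grant access."""
    if explicit_deny:                 # explicit DENY always wins
        return False
    # SCPs and permissions boundaries only constrain; they cannot grant
    return identity_allow and scp_allows and boundary_allows
```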
- A resource policy can be used to grant cross-account access to an S3 bucket. Below is an example policy that grants full S3 access to account 453314488441 on the bucket demo-crossover, which was created in some other account.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "111",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::453314488441:root"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::demo-crossover",
                "arn:aws:s3:::demo-crossover/*"
            ]
        }
    ]
}
- Also note one more thing in the policy above: the resource "arn:aws:s3:::demo-crossover" grants permission at the bucket level only. It won't let you access any object inside the bucket (no copying, updating, or downloading of objects). The resource "arn:aws:s3:::demo-crossover/*" grants object-level permission.
- Every object in S3 has an ACL associated with it that gives full control to the object owner. This creates a strange scenario when an object is uploaded cross-account: the account that owns the bucket has no access to the object, while the account that uploaded it has full access (since the ACL gives full control to the owner). To avoid this situation, during a cross-account upload make sure to pass the x-amz-acl header with the value bucket-owner-full-control. This gives full control to the bucket owner as well as the object owner.
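One common way to enforce this is a bucket policy in the bucket-owner account that denies any put missing that ACL. The following is a sketch reusing the demo-crossover bucket name; adapt it to your bucket.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireBucketOwnerFullControl",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::demo-crossover/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
```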
- When you want to deny everyone except a specific user access to a resource such as S3, the NotPrincipal element can be used with an explicit deny policy. In the bucket policy below, everyone except the user Alice is explicitly denied access to the specified S3 bucket (BUCKETNAME). Note that Alice is only exempted from the deny; she is still implicitly denied until an IAM policy attached to her user explicitly allows access to the bucket. For everyone else, access is denied all the time, even if they have an appropriate allow policy on their IAM user (explicit Deny > explicit Allow).
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "NotPrincipal": {"AWS": [
            "arn:aws:iam::888913816489:user/Alice"
        ]},
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::BUCKETNAME",
            "arn:aws:s3:::BUCKETNAME/*"
        ]
    }]
}
- Version element: 2008-10-17 doesn't support policy variables like ${aws:username}, but it is the default. It is therefore recommended to specify the Version element explicitly with the value 2012-10-17 for full variable support.
- Condition Policy Example
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "IpAddress": {
                "aws:SourceIp": "115.99.177.174/32"
            }
        }
    }
}
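The IpAddress condition above performs a CIDR match on the request's source IP. Conceptually, it works like this stdlib sketch:

```python
import ipaddress

def ip_condition_matches(source_ip, cidr):
    """Return True if the request's aws:SourceIp falls inside the
    policy's IpAddress CIDR (a /32 matches a single address)."""
    return ipaddress.ip_address(source_ip) in ipaddress.ip_network(cidr)
```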
- Dedicated Instances may share hardware with other EC2 instances that belong to the same AWS account.
- A Dedicated Host is a physical server that lets you use your existing per-socket, per-core, or per-VM software licenses, including Windows Server, SUSE, and various others. With Dedicated Hosts, you can use the same physical server over time, even if the instance is stopped and started.
- If you create a new instance from the AMI of an older instance, the public key specified at launch is appended to the authorized_keys file. In short, the older public key added through the original instance remains there.
- Deleting the key-pair from the console will not delete the associated key from the EC2 instance.
- With iptables, you can block access to the instance metadata service for ordinary (non-privileged) users on the instance.
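One common pattern (run as root on the instance) is an owner-match rule like the following; restricting to the root user here is illustrative, and you may want to permit other service accounts that legitimately need metadata access.

```shell
# Drop metadata-service traffic from every user except root
iptables -A OUTPUT -m owner ! --uid-owner root \
         -d 169.254.169.254 -j DROP
```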
- With the gateway endpoint approach (available only for DynamoDB and S3), the VPC endpoint is created outside your VPC and traffic is routed to it via the route table. It is therefore not possible to use it directly from VPNs or Direct Connect.
- Disabling or removing compromised access keys doesn't prevent the user from accessing resources through temporary credentials generated with those same access keys (for example, via the get-session-token API) until they expire. The best way to prevent access in this case is to add a deny-all inline policy to the IAM user whose access key is compromised.
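Such a deny-all inline policy, attached to the compromised user, takes effect immediately and blocks both the long-term keys and any outstanding temporary credentials derived from them:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*"
        }
    ]
}
```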
- When you import key material into a CMK, the CMK is permanently associated with that key material. You can reimport the same key material, but you cannot import different key material into that CMK. Also, you cannot enable automatic key rotation for a CMK with imported key material. However, you can manually rotate a CMK with imported key material.
- Interesting use case: https://www.1strategy.com/blog/2018/01/09/ec2-encrypted-ebs-and-iam-users/
- When using AssumeRole, a trust relationship must be added to the role that allows the specific principal (user, account, or service) to assume that role.
- Protocol Number
- 1 ICMP Internet Control Message [RFC792]
- 4 IPv4 IPv4 encapsulation [RFC2003]
- 6 TCP Transmission Control [RFC793]
- 17 UDP User Datagram
- GuardDuty reads only CloudTrail logs, VPC Flow Logs, and AWS DNS logs.
- GuardDuty will only monitor route53 for DNS logs; it won't monitor Active Directory or any other Custom DNS logs
- GuardDuty allows customers to supply their own trusted IP list; GuardDuty won't generate findings for trusted IPs.
- Findings of various finding types can be suppressed when they are known, so you can remove them from the finding list to avoid noise.
- GuardDuty supports threat IP lists. GuardDuty generates findings if it finds any IP from a threat list in the logs.
- https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-active.html
- It is possible to centralize GuardDuty findings from various accounts into one master account by adding those accounts as member accounts of the master account.
- Inspector supports four types of scans:
- CVE (Common Vulnerabilities and Exposures)
- CIS Benchmarking
- Security Best Practices
- Network Reachability (This doesn't require Inspector Agent to be installed on EC2)
- AWS WAF can be associated with CloudFront, API Gateway, and ALB only.
- You can create a RuleGroup that contains multiple rules and can be reused across multiple web ACLs.
Systems Manager allows you to remotely execute commands on managed hosts without using a bastion host (you might know this feature as EC2 Run Command). A host-based agent polls Systems Manager to determine whether a command awaits execution. Here are some of the benefits:
- This approach uses an AWS managed service, meaning that the Systems Manager components are reliable and highly available.
- Systems Manager requires an IAM policy that allows users or roles to execute commands remotely.
- Systems Manager agents require an IAM role and policy that allow them to invoke the Systems Manager service.
- Systems Manager immutably logs every executed command, which provides an auditable history of commands, including:
- The executed command
- The principal who executed it
- The time when the command was executed
- An abbreviated output of the command
- When AWS CloudTrail is enabled to record and log events in the region where you’re running Systems Manager, every event is recorded by CloudTrail and logged in Amazon CloudWatch Logs.
- Using CloudTrail and CloudWatch rules gives you the ability to use Systems Manager events as triggers for automated responses, such as Amazon SNS notifications or AWS Lambda function invocations.
- Systems Manager can optionally store command history and the entire output of each command in Amazon S3.
- Systems Manager can optionally post a message to an SNS topic, notifying subscribed individuals when commands execute and when they complete.
- Systems Manager is agent-based, which means it is not restricted to Amazon EC2 instances. It can also be used on non-AWS hosts that reside on your premises, in your data center, in another cloud service provider, or elsewhere.
- You don’t have to manage SSH keys.
- EC2 IAM Role policy to enable CloudWatch log agent to push logs to CloudWatch Logs. This enables CloudWatch Agent to create CloudWatch LogGroup -> LogStream and push Log events.
{
"Version": "2012-10-17",
"Statement": [
  {
    "Effect": "Allow",
    "Action": [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents",
      "logs:DescribeLogStreams"
    ],
    "Resource": [
      "arn:aws:logs:*:*:*"
    ]
  }
 ]
}
- Modify the /etc/awslogs/awscli.conf configuration file to include the region where you want the log group to be created.
- /etc/awslogs/awslogs.conf has the configuration that defines the log group name, log stream name, etc.
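A minimal sketch of one stanza in that file; the log file path, group name, and datetime format here are illustrative.

```ini
[/var/log/messages]
file = /var/log/messages
log_group_name = /ec2/prod/messages
log_stream_name = {instance_id}
datetime_format = %b %d %H:%M:%S
```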
- The Trusted Advisor Business plan gives the following checks:
- Cost Optimisation
- Performance
- Security
- Fault Tolerance
- Service Limits
- The free tier has only a limited set of checks, covering security and performance.
- The Business tier can get weekly updates via email.
- Flow logs can be enabled at VPC level or at Subnet level or at Network Interface level
- Format of a Flow Logs record:
2 18208607422 eni-fa865da4 172.31.10.50 172.31.10.69 443 10828 6 2 112 1527046252 1527046310 ACCEPT OK
- version - The VPC Flow Logs Version
- account-id - AWS Account ID
- interface-id - The network interface id
- srcaddr - The source address
- dstaddr - The destination address
- srcport - The source port
- dstport - The destination port
- protocol - The protocol number
- packets - Number of packets transferred
- bytes - Number of bytes transferred
- start - Start time in unix seconds
- end - End time in unix seconds
- action - ACCEPT or REJECT
- log status - Logging status of flow log
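The sample record above splits into those fields positionally, so a minimal parser is just a zip over the field list:

```python
# Field order of the default VPC Flow Logs format (version 2)
FLOW_LOG_FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_log(line):
    """Split a default-format VPC Flow Log record into a field dict."""
    return dict(zip(FLOW_LOG_FIELDS, line.split()))

record = parse_flow_log(
    "2 18208607422 eni-fa865da4 172.31.10.50 172.31.10.69 "
    "443 10828 6 2 112 1527046252 1527046310 ACCEPT OK"
)
```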
- AWS KMS retains all key material for a CMK, even if key rotation is disabled. Key material is deleted only when the CMK is deleted. When you use a CMK to encrypt, AWS KMS uses the current key material. When you use the CMK to decrypt, AWS KMS uses the key material that was used to encrypt.
- Automatic key rotation is disabled by default on customer managed CMKs. When you enable (or re-enable) key rotation, AWS KMS automatically rotates the CMK 365 days after the enable date and every 365 days thereafter.
- While a CMK is disabled, AWS KMS does not rotate it. However, the key rotation status does not change, and you cannot change it while the CMK is disabled. When the CMK is re-enabled, if the key material is more than 365 days old, AWS KMS rotates it immediately and every 365 days thereafter. If the key material is less than 365 days old, AWS KMS resumes the original key rotation schedule.
- While a CMK is pending deletion, AWS KMS does not rotate it. The key rotation status is set to false and you cannot change it while deletion is pending. If deletion is canceled, the previous key rotation status is restored. If the key material is more than 365 days old, AWS KMS rotates it immediately and every 365 days thereafter. If the key material is less than 365 days old, AWS KMS resumes the original key rotation schedule.
- You cannot manage key rotation for AWS managed CMKs. AWS KMS automatically rotates AWS managed CMKs every three years (1095 days).
- You cannot manage key rotation for AWS owned CMKs. The key rotation strategy for an AWS owned CMK is determined by the AWS service that creates and manages the CMK. For details, see the Encryption at Rest topic in the user guide or developer guide for the service.
- You can enable automatic key rotation on the customer managed CMKs that you use for server-side encryption in AWS services. The annual rotation is transparent and compatible with AWS services.
- Multi-Region CMKs. You can enable and disable automatic key rotation for multi-Region keys. You set the property only on the primary key. When AWS KMS synchronizes the keys, it copies the property setting from the primary key to its replica keys. When the key material of the primary key is rotated, AWS KMS automatically copies that key material to all of its replica keys. For details, see Rotating multi-Region keys.
- When AWS KMS automatically rotates the key material for an AWS managed CMK or customer managed CMK, it writes a CMK Rotation event to Amazon CloudWatch Events and a RotateKey event to your AWS CloudTrail log. You can use these records to verify that the CMK was rotated.
- Automatic key rotation is not supported on the following types of CMKs, but you can rotate these CMKs manually.
- Asymmetric CMKs
- CMKs in custom key stores
- CMKs that have imported key material
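The re-enable behavior described above can be sketched as a small helper (a simplification of the stated rules, for illustration only):

```python
from datetime import datetime, timedelta

ROTATION_PERIOD = timedelta(days=365)

def rotates_immediately_on_reenable(last_rotated: datetime, now: datetime) -> bool:
    """Per the rules above: if the key material is more than 365 days old
    when rotation is re-enabled, KMS rotates it immediately; otherwise it
    resumes the original rotation schedule."""
    return now - last_rotated > ROTATION_PERIOD

# Key material 400 days old -> immediate rotation on re-enable.
print(rotates_immediately_on_reenable(datetime(2020, 1, 1), datetime(2021, 2, 4)))  # True
```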
- You can revoke session permissions from a role to immediately deny all permissions to any current user of the role's credentials. To successfully revoke session permissions, you must have the PutRolePolicy permission for the role; this allows you to attach the AWSRevokeOlderSessions inline policy to the role.
- Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
- In the navigation pane of the IAM Dashboard, choose Roles, and then choose the name (not the check box) of the role whose permissions you want to revoke.
- On the Summary page for the selected role, choose the Revoke sessions tab.
- On the Revoke sessions tab, choose Revoke active sessions.
- AWS asks you to confirm the action. Choose Revoke active sessions on the dialog box.
- IAM immediately attaches a policy named AWSRevokeOlderSessions to the role. The policy denies all access to users who assumed the role before the moment you chose Revoke active sessions. Any user who assumes the role after you chose Revoke active sessions is not affected.
- When you apply a new policy to a user or a resource, it may take a few minutes for policy updates to take effect.
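Under the hood, AWSRevokeOlderSessions is a deny-all inline policy conditioned on when the session's credentials were issued. A hedged sketch of attaching it with boto3 (the policy shape matches what the console generates; exact formatting may differ):

```python
import json
from datetime import datetime, timezone

def build_revoke_policy(cutoff: datetime) -> dict:
    """Deny everything to sessions whose credentials were issued before
    `cutoff`. Sessions assumed after the cutoff are unaffected."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "DateLessThan": {
                    "aws:TokenIssueTime": cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")
                }
            },
        }],
    }

def revoke_sessions(role_name: str) -> None:
    import boto3  # requires credentials with iam:PutRolePolicy on the role
    boto3.client("iam").put_role_policy(
        RoleName=role_name,
        PolicyName="AWSRevokeOlderSessions",
        PolicyDocument=json.dumps(build_revoke_policy(datetime.now(timezone.utc))),
    )
```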
You can use VPC Traffic Mirroring with your existing Virtual Private Clouds (VPCs) to capture and inspect network traffic at scale. This allows you to:
- Detect Network & Security Anomalies – You can extract traffic of interest from any workload in a VPC and route it to the detection tools of your choice. You can detect and respond to attacks more quickly than is possible with traditional log-based tools.
- Gain Operational Insights – You can use VPC Traffic Mirroring to get the network visibility and control that will let you make security decisions that are better informed.
- Implement Compliance & Security Controls – You can meet regulatory & compliance requirements that mandate monitoring, logging, and so forth.
- Troubleshoot Issues – You can mirror application traffic internally for testing and troubleshooting. You can analyze traffic patterns and proactively locate choke points that will impair the performance of your applications.
- You can think of VPC Traffic Mirroring as a “virtual fiber tap” that gives you direct access to the network packets flowing through your VPC. You can choose to capture all traffic or you can use filters to capture the packets that are of particular interest to you, with an option to limit the number of bytes captured per packet. You can use VPC Traffic Mirroring in a multi-account AWS environment, capturing traffic from VPCs spread across many AWS accounts and then routing it to a central VPC for inspection.
- You can mirror traffic from any EC2 instance that is powered by the AWS Nitro system (A1, C5, C5d, C5n, I3en, M5, M5a, M5ad, M5d, p3dn.24xlarge, R5, R5a, R5ad, R5d, T3, T3a, and z1d as I write this).
- Let’s review the key elements of VPC Traffic Mirroring and then set it up:
- Mirror Source – An AWS network resource that exists within a particular VPC, and that can be used as the source of traffic. VPC Traffic Mirroring supports the use of Elastic Network Interfaces (ENIs) as mirror sources.
- Mirror Target – An ENI or Network Load Balancer that serves as a destination for the mirrored traffic. The target can be in the same AWS account as the Mirror Source, or in a different account for implementation of the central-VPC model that I mentioned above.
- Mirror Filter – A specification of the inbound or outbound (with respect to the source) traffic that is to be captured (accepted) or skipped (rejected). The filter can specify a protocol, ranges for the source and destination ports, and CIDR blocks for the source and destination. Rules are numbered, and processed in order within the scope of a particular Mirror Session.
- Traffic Mirror Session – A connection between a mirror source and target that makes use of a filter. Sessions are numbered, evaluated in order, and the first match (accept or reject) is used to determine the fate of the packet. A given packet is sent to at most one target.
- You can set this up using the VPC Console, EC2 CLI, or the EC2 API, with CloudFormation support in the works.
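As a sketch, the setup above maps onto four EC2 API calls (the boto3 method names are real; all resource IDs and the filter rule values here are placeholders):

```python
def build_mirror_session_params(source_eni: str, target_id: str,
                                filter_id: str, session_number: int) -> dict:
    """Parameters for ec2.create_traffic_mirror_session. Sessions are
    numbered and evaluated in order."""
    return {
        "NetworkInterfaceId": source_eni,    # mirror source (an ENI)
        "TrafficMirrorTargetId": target_id,  # ENI or NLB target
        "TrafficMirrorFilterId": filter_id,  # accept/reject rules
        "SessionNumber": session_number,
    }

def setup_mirroring() -> None:
    import boto3  # IDs below are placeholder assumptions
    ec2 = boto3.client("ec2")
    target = ec2.create_traffic_mirror_target(NetworkInterfaceId="eni-target123")
    filt = ec2.create_traffic_mirror_filter(Description="inspect HTTPS")
    ec2.create_traffic_mirror_filter_rule(
        TrafficMirrorFilterId=filt["TrafficMirrorFilter"]["TrafficMirrorFilterId"],
        TrafficDirection="ingress", RuleNumber=100, RuleAction="accept",
        Protocol=6, DestinationCidrBlock="0.0.0.0/0", SourceCidrBlock="0.0.0.0/0",
    )
    ec2.create_traffic_mirror_session(**build_mirror_session_params(
        "eni-source456",
        target["TrafficMirrorTarget"]["TrafficMirrorTargetId"],
        filt["TrafficMirrorFilter"]["TrafficMirrorFilterId"],
        1,
    ))
```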
- By default, the log files delivered by CloudTrail to your bucket are encrypted by Amazon server-side encryption with Amazon S3-managed encryption keys (SSE-S3). To provide a security layer that is directly manageable, you can instead use server-side encryption with AWS KMS–managed keys (SSE-KMS) for your CloudTrail log files.
- Enabling server-side encryption encrypts the log files but not the digest files with SSE-KMS. Digest files are encrypted with Amazon S3-managed encryption keys (SSE-S3).
- If you are using an existing S3 bucket with an S3 Bucket Key, CloudTrail must be allowed permission in the key policy to use the AWS KMS actions GenerateDataKey and DescribeKey. If cloudtrail.amazonaws.com is not granted those permissions in the key policy, you cannot create or update a trail.
- The CMK that you choose must be created in the same AWS Region as the Amazon S3 bucket that receives your log files. For example, if the log files will be stored in a bucket in the US East (Ohio) Region, you must create or choose a CMK that was created in that Region.
- This approach has the following advantages:
- You can create and manage the CMK encryption keys yourself.
- You can use a single CMK to encrypt and decrypt log files for multiple accounts across all regions.
- You have control over who can use your key for encrypting and decrypting CloudTrail log files. You can assign permissions for the key to the users in your organization according to your requirements.
- You have enhanced security. With this feature, to read log files, the following permissions are required:
- A user must have S3 read permissions for the bucket that contains the log files.
- A user must also have a policy or role applied that allows decrypt permissions by the CMK policy.
- Because S3 automatically decrypts the log files for requests from users authorized to use the CMK, SSE-KMS encryption for CloudTrail log files is backward-compatible with applications that read CloudTrail log data.
- If you create a CMK in the CloudTrail console, CloudTrail adds the required CMK policy sections for you.
- If you create a key in the IAM console or with the AWS CLI, then:
- After creating the CMK, add policy sections to the key that enable CloudTrail to encrypt and users to decrypt log files.
- Enable CloudTrail to describe CMK properties.
- Update your trail to use the CMK whose policy you modified for CloudTrail. Be sure to grant decrypt permissions to all the required users in the previous step; without decrypt permissions, a user cannot read the log files.
Granting encrypt permissions Example: Allow CloudTrail to encrypt logs on behalf of specific accounts
- The following example policy statement illustrates how another account can use your CMK to encrypt CloudTrail logs.
- Scenario
- Your CMK is in account 111111111111.
- Both you and account 222222222222 will encrypt logs.
- In the policy, you add one or more accounts that will encrypt with your key to the CloudTrail EncryptionContext. This restricts CloudTrail to using your key to encrypt logs only for those accounts that you specify. Giving the root of account 222222222222 permission to encrypt logs delegates the administrator of that account to allocate encrypt permissions as required to other users in account 222222222222 by changing their IAM user policies.
- CMK policy statement:
{
  "Sid": "Enable CloudTrail Encrypt Permissions",
  "Effect": "Allow",
  "Principal": {
    "Service": "cloudtrail.amazonaws.com"
  },
  "Action": "kms:GenerateDataKey*",
  "Resource": "*",
  "Condition": {
    "StringLike": {
      "kms:EncryptionContext:aws:cloudtrail:arn": [
        "arn:aws:cloudtrail:*:111111111111:trail/*",
        "arn:aws:cloudtrail:*:222222222222:trail/*"
      ]
    }
  }
}
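The statement above can be generated for any list of accounts with a small helper (purely illustrative; it reproduces the policy shown):

```python
def cloudtrail_encrypt_statement(account_ids: list) -> dict:
    """Allow CloudTrail to use the CMK to encrypt logs only for trails
    in the listed accounts, via the EncryptionContext condition."""
    return {
        "Sid": "Enable CloudTrail Encrypt Permissions",
        "Effect": "Allow",
        "Principal": {"Service": "cloudtrail.amazonaws.com"},
        "Action": "kms:GenerateDataKey*",
        "Resource": "*",
        "Condition": {
            "StringLike": {
                "kms:EncryptionContext:aws:cloudtrail:arn": [
                    f"arn:aws:cloudtrail:*:{acct}:trail/*" for acct in account_ids
                ]
            }
        },
    }
```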
Before you add your CMK to your CloudTrail configuration, it is important to give decrypt permissions to all users who require them. Users who have encrypt permissions but no decrypt permissions will not be able to read encrypted logs. If you are using an existing S3 bucket with an S3 Bucket Key, kms:Decrypt permissions are required to create or update a trail with SSE-KMS encryption enabled.
- Example Scenario 1
- Your CMK, S3 bucket, and IAM user Bob are in account 111111111111.
- You give IAM user Bob permission to decrypt CloudTrail logs in the S3 bucket. In the key policy, you enable CloudTrail log decrypt permissions for IAM user Bob. CMK policy statement:
{
  "Sid": "Enable CloudTrail log decrypt permissions",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111111111111:user/Bob"
  },
  "Action": "kms:Decrypt",
  "Resource": "*",
  "Condition": {
    "Null": {
      "kms:EncryptionContext:aws:cloudtrail:arn": "false"
    }
  }
}
- Example Scenario 2
- Your CMK is in account 111111111111.
- The IAM user Alice and S3 bucket are in account 222222222222.
In this case, you give CloudTrail permission to decrypt logs under account 222222222222, and you give Alice's IAM user policy permission to use your key KeyA, which is in account 111111111111.
CMK policy statement:
{
  "Sid": "Enable encrypted CloudTrail log read access",
  "Effect": "Allow",
  "Principal": {
    "AWS": [
      "arn:aws:iam::222222222222:root"
    ]
  },
  "Action": "kms:Decrypt",
  "Resource": "*",
  "Condition": {
    "Null": {
      "kms:EncryptionContext:aws:cloudtrail:arn": "false"
    }
  }
}
Alice's IAM user policy statement:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:us-west-2:111111111111:key/keyA"
    }
  ]
}
- Example Scenario 3
- Your CMK and S3 bucket are in account 111111111111.
- The user who will read logs from your bucket is in account 222222222222.
To enable this scenario, you enable decrypt permissions for the IAM role CloudTrailReadRole in your account, and then give the other account permission to assume that role.
CMK policy statement:
{
  "Sid": "Enable encrypted CloudTrail log read access",
  "Effect": "Allow",
  "Principal": {
    "AWS": [
      "arn:aws:iam::111111111111:role/CloudTrailReadRole"
    ]
  },
  "Action": "kms:Decrypt",
  "Resource": "*",
  "Condition": {
    "Null": {
      "kms:EncryptionContext:aws:cloudtrail:arn": "false"
    }
  }
}
CloudTrailReadRole trust entity policy statement:
{
 "Version": "2012-10-17",
 "Statement": [
   {
     "Sid": "",
     "Effect": "Allow",
     "Principal": {
       "AWS": "arn:aws:iam::222222222222:root"
     },
     "Action": "sts:AssumeRole"
    }
  ]
 }
CloudTrail requires the ability to describe the properties of the CMK. To enable this functionality, add the following required statement as is to your CMK policy. This statement does not grant CloudTrail any permissions beyond the other permissions that you specify.
{
  "Sid": "Allow CloudTrail access",
  "Effect": "Allow",
  "Principal": {
    "Service": "cloudtrail.amazonaws.com"
  },
  "Action": "kms:DescribeKey",
  "Resource": "*"
}
By default, Amazon S3 buckets and objects are private. Only the resource owner (the AWS account that created the bucket) can access the bucket and the objects it contains. The resource owner can grant access permissions to other resources and users by writing an access policy. To deliver log files to an S3 bucket, CloudTrail must have the required permissions, and it cannot be configured as a Requester Pays bucket. CloudTrail automatically attaches the required permissions to a bucket when you create an Amazon S3 bucket as part of creating or updating a trail in the CloudTrail console. CloudTrail adds the following fields in the policy for you:
- The allowed SIDs.
- The bucket name.
- The service principal name for CloudTrail.
- The name of the folder where the log files are stored, including the bucket name, a prefix (if you specified one), and your AWS account ID.
- The following S3 bucket policy allows CloudTrail to write log files to the bucket from supported regions.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck20150319",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::myBucketName"
        },
        {
            "Sid": "AWSCloudTrailWrite20150319",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::myBucketName/[optional prefix]/AWSLogs/myAccountID/*",
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}
        }
    ]
}
- If you specified an existing S3 bucket as the storage location for log file delivery, you must attach a policy to the bucket that allows CloudTrail to write to the bucket.
- For a bucket to receive log files from multiple accounts, its bucket policy must grant CloudTrail permission to write log files from all the accounts you specify. This means that you must modify the bucket policy on your destination bucket to grant CloudTrail permission to write log files from each specified account. Example Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSCloudTrailAclCheck20131101",
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudtrail.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::myBucketName"
    },
    {
      "Sid": "AWSCloudTrailWrite20131101",
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudtrail.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::myBucketName/[optional] myLogFilePrefix/AWSLogs/111111111111/*",
        "arn:aws:s3:::myBucketName/[optional] myLogFilePrefix/AWSLogs/222222222222/*"
      ],
      "Condition": { 
        "StringEquals": { 
          "s3:x-amz-acl": "bucket-owner-full-control" 
        }
      }
    }
  ]
}
- When you create a new bucket as part of creating or updating a trail, CloudTrail attaches the required permissions to your bucket. The bucket policy uses the service principal name, "cloudtrail.amazonaws.com", which allows CloudTrail to deliver logs for all regions. If CloudTrail is not delivering logs for a region, it's possible that your bucket has an older policy that specifies CloudTrail account IDs for each region. This policy gives CloudTrail permission to deliver logs only for the regions specified. As a best practice, update the policy to use a permission with the CloudTrail service principal. To do this, replace the account ID ARNs with the service principal name: "cloudtrail.amazonaws.com". This gives CloudTrail permission to deliver logs for current and new regions.
- If you try to add, modify, or remove a log file prefix for an S3 bucket that receives logs from a trail, you may see the error: "There is a problem with the bucket policy." A bucket policy with an incorrect prefix can prevent your trail from delivering logs to the bucket. To resolve this issue, use the Amazon S3 console to update the prefix in the bucket policy, and then use the CloudTrail console to specify the same prefix for the bucket in the trail.
- If you have created an organization in AWS Organizations, you can create a trail that will log all events for all AWS accounts in that organization. This is sometimes referred to as an organization trail. You can also choose to edit an existing trail in the management account and apply it to an organization, making it an organization trail. Organization trails log events for the management account and all member accounts in the organization.
- You must be logged in with the management account for the organization to create an organization trail. You must also have sufficient permissions for the IAM user or role in the management account to successfully create an organization trail. If you do not have sufficient permissions, you cannot see the option to apply a trail to an organization.
- When you create an organization trail, a trail with the name that you give it will be created in every AWS account that belongs to your organization. Users with CloudTrail permissions in member accounts will be able to see this trail when they log into the AWS CloudTrail console from their AWS accounts, or when they run AWS CLI commands such as describe-trail. However, users in member accounts will not have sufficient permissions to delete the organization trail, turn logging on or off, change what types of events are logged, or otherwise alter the organization trail in any way.
- When you create an organization trail in the console, or when you enable CloudTrail as a trusted service in the Organizations, this creates a service-linked role to perform logging tasks in your organization's member accounts. This role is named AWSServiceRoleForCloudTrail, and is required for CloudTrail to successfully log events for an organization. If an AWS account is added to an organization, the organization trail and service-linked role will be added to that AWS account, and logging will begin for that account automatically in the organization trail. If an AWS account is removed from an organization, the organization trail and service-linked role will be deleted from the AWS account that is no longer part of the organization. However, log files for that removed account created prior to the account's removal will still remain in the Amazon S3 bucket where log files are stored for the trail.
- Organization trails are similar to regular trails in many ways. You can create multiple trails for your organization, and choose whether to create an organization trail in all regions or a single region, and what kinds of events you want logged in your organization trail, just as in any other trail. However, there are some differences. For example, when you create a trail in the console and choose whether to log data events for Amazon S3 buckets or AWS Lambda functions, the only resources listed in the CloudTrail console are those for the management account, but you can add the ARNs for resources in member accounts. Data events for specified member account resources will be logged without having to manually configure cross-account access to those resources.
- In the console, you create a trail that logs all regions. This is a recommended best practice; logging activity in all regions helps you keep your AWS environment more secure. To create a single-region trail, use the AWS CLI.
- Organization trails make the most recent 90 days of management events visible in Event history the same way that individual accounts do. When you view events in Event history for an organization in AWS Organizations, you can view the events only for the AWS account with which you are signed in. For example, if you are signed in with the organization management account, Event history shows the last 90 days of management events for the management account. Organization member account events are not shown in Event history for the management account. To view member account events in Event history, sign in with the member account.
You must specify an Amazon S3 bucket to receive the log files for an organization trail. This bucket must have a policy that allows CloudTrail to put the log files for the organization into the bucket.
- The following is an example policy for an Amazon S3 bucket named my-organization-bucket. This bucket is in an AWS account with the ID 111111111111, which is the management account for an organization with the ID o-exampleorgid that allows logging for an organization trail. It also allows logging for the 111111111111 account in the event that the trail is changed from an organization trail to a trail for that account only.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck20150319",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "cloudtrail.amazonaws.com"
                ]
            },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::my-organization-bucket"
        },
        {
            "Sid": "AWSCloudTrailWrite20150319",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "cloudtrail.amazonaws.com"
                ]
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-organization-bucket/AWSLogs/111111111111/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        },
        {
            "Sid": "AWSCloudTrailOrganizationWrite20150319",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "cloudtrail.amazonaws.com"
                ]
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-organization-bucket/AWSLogs/o-exampleorgid/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
This example policy does not allow any users from member accounts to access the log files created for the organization. By default, organization log files are accessible only to the management account.
Recently while working with one of our clients, we ran into an issue where an IAM user (we’ll call him John) with full EC2 permissions could not start an EC2 instance after it was stopped for a maintenance task. When starting the instance, the instance state would change to “Pending,” but after a few seconds it would switch back to “Stopped.”
Upon further inspection, we discovered that the instance had attached EBS volumes that were encrypted using a customer managed CMK. When these encrypted volumes were detached, John could start the EC2 instance without any issues. This wasn't what we wanted, because we needed these volumes attached to the instance. To find the root of the problem, let's take a look at how EBS and EC2 use encryption.
From the AWS documentation:
1. When you create an encrypted EBS volume, Amazon EBS sends a GenerateDataKeyWithoutPlaintext request to AWS KMS, specifying the CMK that you chose for EBS volume encryption.
2. AWS KMS generates a new data key, encrypts it under the specified CMK, and then sends the encrypted data key to Amazon EBS to store with the volume metadata.
3. Amazon EC2 sends the encrypted data key to AWS KMS with a Decrypt request.
4. AWS KMS decrypts the encrypted data key and then sends the decrypted (plaintext) data key to Amazon EC2.
5. Amazon EC2 uses the plaintext data key in hypervisor memory to encrypt disk I/O to the EBS volume. The data key persists in memory as long as the EBS volume is attached to the EC2 instance.
The problem we’re facing deals with steps 3 through 5. Although John has full access to EC2 resources, he does not have permission to allow EC2 to access the CMK in KMS. This prevents EC2 from retrieving a decryption key for the EBS volume(s).
Solution:
- Give John the kms:CreateGrant permission. This permission allows grants to be added to CMKs. Grants are an alternative to key policies for specifying permissions: a grant specifies who can use the CMK and under what conditions. For example, a grant can allow IAM user John to encrypt and decrypt with the key, or allow the EC2 service to decrypt with the specified key. Having permission to create KMS grants is essentially like having the power to give anyone kms:* privileges for a key. At this point you’re probably saying, “Wait! Hold up a minute. You mean to tell me that the kms:CreateGrant permission allows someone to give out any permissions they want to my key?” In short, yes. However, let’s continue with the principle of least privilege. We must ensure that grants can only be given to AWS services that require KMS permissions. This is done with the kms:GrantIsForAWSResource condition in your policy. When the kms:GrantIsForAWSResource condition is set to true, the IAM user or resource with the kms:CreateGrant permission can only create grants for AWS services integrated with AWS KMS, not for IAM users. If you’re setting a policy on your IAM user, you could use a policy similar to the following (using your CMK’s ARN in the Resource section):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:CreateGrant"
      ],
      "Resource": [
        "arn:aws:kms:<region>:<account #>:key/<key id>"
      ],
      "Condition": {
        "Bool": {
          "kms:GrantIsForAWSResource": "true"
        }
      }
    }
  ]
}
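With kms:CreateGrant allowed, a grant of the kind EC2 needs can be expressed through boto3's kms.create_grant (the call and parameter names are real; the ARNs and the minimal Operations list are assumptions for illustration):

```python
def build_grant_params(key_arn: str, grantee_principal: str) -> dict:
    """A grant letting the grantee decrypt with the CMK, as EC2 needs when
    attaching encrypted EBS volumes. Decrypt-only is an assumed minimal
    set of operations for this scenario."""
    return {
        "KeyId": key_arn,
        "GranteePrincipal": grantee_principal,
        "Operations": ["Decrypt"],
    }

def create_grant(key_arn: str, grantee_principal: str) -> str:
    import boto3  # requires kms:CreateGrant on the key
    resp = boto3.client("kms").create_grant(**build_grant_params(key_arn, grantee_principal))
    return resp["GrantId"]
```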