I set up replication between my Amazon Simple Storage Service (Amazon S3) general purpose buckets. However, the objects don’t replicate to the destination bucket that’s in the same AWS Region or a different Region.
Short description
Note: You can use Amazon S3 replication only for general purpose buckets. You can’t use replication for directory buckets and table buckets.
To troubleshoot Amazon S3 objects that don’t replicate for cross-Region replication (CRR) or same-Region replication (SRR), check your destination bucket permissions. Also, check the public access settings and bucket ownership settings.
After you resolve the issues that caused the replication to fail, there might be objects in the source bucket that still don’t replicate. By default, Amazon S3 replication doesn’t replicate existing objects or objects with a FAILED or REPLICA replication status. To check the replication status of objects, see How do I view objects that failed replication from one Amazon S3 bucket to another? For these objects, use S3 Batch Replication.
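To decide which objects need S3 Batch Replication, you can bucket each object's replication status as described above. A minimal Python sketch, assuming statuses taken from the x-amz-replication-status value that a HeadObject call returns (the example objects below are hypothetical):

```python
# Sketch: decide whether an object needs S3 Batch Replication based on its
# replication status. The status values (PENDING, COMPLETED, FAILED, REPLICA)
# come from the x-amz-replication-status header that HeadObject returns.
def needs_batch_replication(replication_status):
    """Return True if live replication will not pick this object up again."""
    # FAILED and REPLICA objects are skipped by live replication by default,
    # as are objects with no status at all (existing objects that predate
    # the replication rule).
    return replication_status in (None, "FAILED", "REPLICA")

# Hypothetical HeadObject-style statuses for illustration:
examples = {
    "new-object.txt": "PENDING",    # replication in progress
    "replicated.txt": "COMPLETED",  # already replicated
    "broken.txt": "FAILED",         # needs Batch Replication
    "pre-existing.txt": None,       # predates the rule; needs Batch Replication
}
for key, status in examples.items():
    print(key, needs_batch_replication(status))
```

This only classifies statuses you already retrieved; it doesn't call Amazon S3 itself.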
Resolution
Identify replication setup issues
Upload an object to the source bucket to test the replication after each configuration change. It’s a best practice to change one configuration at a time to identify any replication setup issues.
Also, activate the s3:Replication:OperationFailedReplication event type notification to determine the cause of the failure.
Grant the minimum Amazon S3 permissions
Confirm that the AWS Identity and Access Management (IAM) role that you used in the replication rule has the correct permissions. If the source and destination buckets are in different AWS accounts, then confirm that the destination account’s bucket policy grants permissions to the replication role. The following example IAM policy has the minimum required permissions for replication:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetReplicationConfiguration",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::SourceBucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObjectVersionForReplication",
        "s3:GetObjectVersionAcl",
        "s3:GetObjectVersionTagging"
      ],
      "Resource": [
        "arn:aws:s3:::SourceBucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ReplicateObject",
        "s3:ReplicateTags"
      ],
      "Resource": "arn:aws:s3:::DestinationBucket/*"
    }
  ]
}
Note: Replace SourceBucket with your source bucket and DestinationBucket with your destination bucket.
Based on the replication rule options, you might need to grant additional permissions.
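Before you test replication end to end, it can help to sanity-check that the role policy grants at least the minimum actions above. A rough stdlib sketch (the helper name and the sample policy are illustrative, not part of any AWS tooling):

```python
import json

# Minimum actions the replication role needs, per the example policy above.
REQUIRED_ACTIONS = {
    "s3:GetReplicationConfiguration", "s3:ListBucket",
    "s3:GetObjectVersionForReplication", "s3:GetObjectVersionAcl",
    "s3:GetObjectVersionTagging", "s3:ReplicateObject", "s3:ReplicateTags",
}

def missing_replication_actions(policy):
    """Return the required actions that no Allow statement grants."""
    granted = set()
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):  # "Action" may be a string or a list
            actions = [actions]
        granted.update(actions)
    return REQUIRED_ACTIONS - granted

# Example: a policy that grants only one of the required actions.
policy = json.loads('{"Version": "2012-10-17", "Statement": ['
                    '{"Effect": "Allow", "Action": ["s3:ReplicateObject"],'
                    ' "Resource": "arn:aws:s3:::DestinationBucket/*"}]}')
print(sorted(missing_replication_actions(policy)))
```

This is a coarse check: it ignores Resource scoping and wildcards, so treat an empty result as necessary but not sufficient.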
The IAM role must have a trust policy that allows Amazon S3 to assume the role to replicate objects. Example trust policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Grant additional Amazon S3 permissions
If you set the replication rule to Change object ownership to the destination bucket owner, then you must configure additional permissions.
Note: If the destination bucket’s object ownership is Bucket owner enforced, then you don’t need Change object ownership to the destination bucket owner in the replication rule. The change occurs by default.
To grant the IAM role the s3:ObjectOwnerOverrideToBucketOwner permissions, add the following statement to the replication role’s IAM policy:
{
  "Effect": "Allow",
  "Action": [
    "s3:ObjectOwnerOverrideToBucketOwner"
  ],
  "Resource": "arn:aws:s3:::DestinationBucket/*"
}
Note: Replace DestinationBucket with your destination bucket.
Also, add the following s3:ObjectOwnerOverrideToBucketOwner permission in the bucket policy for the destination account:
{
  "Sid": "1",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::SourceBucket-account-ID:role/service-role/source-account-IAM-role"
  },
  "Action": [
    "s3:ObjectOwnerOverrideToBucketOwner"
  ],
  "Resource": "arn:aws:s3:::DestinationBucket/*"
}
Note: Replace SourceBucket-account-ID with the source bucket account, source-account-IAM-role with the source account IAM role, and DestinationBucket with the destination bucket.
If you activated delete marker replication on the replication rule, then the IAM role must have the following s3:ReplicateDelete permissions:
{
  "Effect": "Allow",
  "Action": [
    "s3:ReplicateDelete"
  ],
  "Resource": "arn:aws:s3:::DestinationBucket/*"
}
Note: Replace DestinationBucket with your destination bucket.
If the destination bucket is in another account, then the destination bucket owner must also add the following permission in the bucket policy:
{
  "Version": "2012-10-17",
  "Id": "PolicyForDestinationBucket",
  "Statement": [
    {
      "Sid": "Permissions on objects",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::SourceBucket-account-ID:role/service-role/source-account-IAM-role"
      },
      "Action": [
        "s3:ReplicateObject",
        "s3:ReplicateTags",
        "s3:ObjectOwnerOverrideToBucketOwner",
        "s3:ReplicateDelete"
      ],
      "Resource": "arn:aws:s3:::DestinationBucket/*"
    },
    {
      "Sid": "Permissions on bucket",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::SourceBucket-account-ID:role/service-role/source-account-IAM-role"
      },
      "Action": [
        "s3:GetBucketVersioning",
        "s3:PutBucketVersioning"
      ],
      "Resource": "arn:aws:s3:::DestinationBucket"
    }
  ]
}
Note: Replace arn:aws:iam::SourceBucket-account-ID:role/service-role/source-account-IAM-role with the Amazon Resource Name (ARN) of your replication role and DestinationBucket with your destination bucket.
Grant AWS KMS permissions
If you encrypted the bucket’s source objects with an AWS Key Management Service (AWS KMS) key, then the replication rule must include AWS KMS encryption.
To configure the required permissions, complete the following steps:
- Open the Amazon S3 console.
- Choose the source bucket.
- Choose the Management tab, and then under Replication rules choose the replication rule.
- Choose Edit.
- Under Encryption, choose Replicate objects encrypted with AWS KMS.
- Under AWS KMS key for encrypting destination objects, select an AWS KMS key. The default option is to use the AWS managed key (aws/s3).
For example replication policies, see Example policies – Using SSE-S3 and SSE-KMS with replication.
Note: If the destination bucket is in a different account, then specify an AWS KMS customer managed key that the destination account owns. The default aws/s3 key encrypts the objects with the AWS managed key that the source account owns. However, you can’t share the AWS managed key with another account.
Grant additional AWS KMS permissions for cross-account scenarios
To use the destination account’s AWS KMS key to encrypt the destination objects, the destination account must allow the replication role in the key policy. Example policy:
{
  "Sid": "AllowS3ReplicationSourceRoleToUseTheKey",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::SourceBucket-account-ID:role/service-role/source-account-IAM-role"
  },
  "Action": [
    "kms:GenerateDataKey",
    "kms:Encrypt"
  ],
  "Resource": "*"
}

When I try to connect to my Amazon Elastic Compute Cloud (Amazon EC2) instance, I get an error.
Resolution
Prerequisites:
- Verify that the security group that’s attached to your instance allows access to port 22 for Linux and port 3389 for Windows.
- Verify that your network access control list (network ACL) allows access to the instance.
- Verify that your route table has a route for the connection.
Troubleshooting
Check that your EC2 instance passes status checks. For more information, see the following resources:
- Why is my EC2 Linux instance unreachable and failing its status checks?
- Why is my EC2 Windows instance down with an instance status check failure?
- Why is my EC2 Windows instance down with a system status check failure or status check 0/2?
If your instance passes status checks and you get connection errors, then see the following resources:
- How can I troubleshoot connecting to my Amazon EC2 Linux instance using SSH?
- How do I troubleshoot RDP connection issues with my Amazon EC2 Windows instance?
- How do I troubleshoot authentication errors when I use RDP to connect to an EC2 Windows instance?
- Troubleshoot issues connecting to your Amazon EC2 Linux instance
- Troubleshoot issues connecting to your Amazon EC2 Windows instance
Related information
I have a solution for why the EC2 instance gives this error. Go to the directory where your .pem file is stored, and run the command “chmod 400 <pemfile>”. After you set these permissions, you can connect to your EC2 instance over SSH.
Another point to add regarding issues with EC2 Instance Connect via AWS Management Console.
If you receive an error when you try to connect to the instance through EC2 Instance Connect, then it’s likely that your security group wasn’t properly configured.
EC2 Instance Connect uses specific IP address ranges for browser-based SSH connections to your instance (when users use the Amazon EC2 console to connect to an instance). If your users will use the Amazon EC2 console to connect to an instance, ensure that the security group associated with your instance allows inbound SSH traffic from the IP address range for EC2_INSTANCE_CONNECT. To identify the address range, download the JSON file provided by AWS and filter for the subset for EC2 Instance Connect, using EC2_INSTANCE_CONNECT as the service value. These IP address ranges differ between AWS Regions. For more information about downloading the JSON file and filtering by service, see AWS IP address ranges in the Amazon VPC User Guide.
To look up the AWS IP address ranges for each service in each Region, use the JSON file available at https://ip-ranges.amazonaws.com/ip-ranges.json
For example, the IP address range for the EC2 Instance Connect service in the us-east-1 Region is:
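The filtering described above can be sketched in Python. The sample document below only mimics the shape of ip-ranges.json; the prefixes are placeholders, not the real EC2 Instance Connect ranges, so in practice download the live file first:

```python
import json

def instance_connect_ranges(ip_ranges, region):
    """Filter an ip-ranges.json-style document for EC2 Instance Connect prefixes."""
    return [p["ip_prefix"] for p in ip_ranges.get("prefixes", [])
            if p.get("service") == "EC2_INSTANCE_CONNECT" and p.get("region") == region]

# A made-up sample with the same shape as ip-ranges.json; these prefixes
# are documentation placeholders, not real AWS ranges.
sample = json.loads("""
{
  "prefixes": [
    {"ip_prefix": "192.0.2.0/29", "region": "us-east-1", "service": "EC2_INSTANCE_CONNECT"},
    {"ip_prefix": "198.51.100.0/24", "region": "us-east-1", "service": "AMAZON"},
    {"ip_prefix": "203.0.113.0/29", "region": "eu-west-1", "service": "EC2_INSTANCE_CONNECT"}
  ]
}
""")
print(instance_connect_ranges(sample, "us-east-1"))  # → ['192.0.2.0/29']
```

To run this against live data, fetch https://ip-ranges.amazonaws.com/ip-ranges.json (for example with urllib.request) and pass the parsed document in place of `sample`.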
Thank you for your comment. We’ll review and update the Knowledge Center article as needed.
Is there a YouTube video or a training video that helps with this? I am new to the entire AWS environment, and it is not very newbie-friendly with the navigation and menus. I’ve been trying all the different options people recommended, and now I think my account is a mess.
You can learn the process from the following YouTube link: https://www.youtube.com/watch?v=rtG8S5WsSHg&t=26s
If you are new to AWS, I would suggest attending a course from https://www.udemy.com/ .
For the EC2 issue, check that port 22 is open in the security group.
Hi, what’s your error? Have you configured your SSH connection?
- Ensure that the security group attached to your instance allows access to port 22 for Linux and port 3389 for Windows.
- Verify that your network access control list (network ACL) permits access to the instance.
- Confirm that your route table has a route for the connection.
These are some solutions I can recommend. This video will help you: https://www.youtube.com/watch?v=rtG8S5WsSHg&t=26s
Solution 1: Ensure the security group associated with your EC2 instance allows incoming connections on the required port: SSH (default port 22) for Linux instances, or RDP (default port 3389) for Windows instances.
Solution 2: IAM or AWS account issues. Ensure your IAM permissions allow you to manage and access the EC2 instance. Check for potential restrictions, such as AWS Organizations service control policies (SCPs).
Solution 3: Confirm you’re using the correct key pair file (.pem) associated with the EC2 instance. Ensure the key file has proper permissions: chmod 400 your-key.pem
If you are facing an EC2 connection issue, follow these steps to troubleshoot:
- Ensure the EC2 instance’s subnet has internet gateway access in the route table associated with the subnet.
- Check whether the security group has an inbound rule for port 22 (Linux) or port 3389 (Windows), open to 0.0.0.0/0 or a specific IP.
- Make sure you created the EC2 instance in a public subnet.
- Check your .pem file permissions if you get the following type of error.
ssh -i my-apiserver.pem ubuntu@34.70.165.154
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for 'my-apiserver.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "my-apiserver.pem": bad permissions
(ubuntu@34.70.165.154) Password:
If you see an error like the one above, run chmod 400 my-apiserver.pem to restrict the key file’s permissions.
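The same fix can be scripted. A small stdlib sketch that mirrors what chmod 400 does, demonstrated on a throwaway temp file rather than a real key:

```python
import os
import stat
import tempfile

def fix_key_permissions(path):
    """Tighten a private key file to owner read-only (mode 400) if needed."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):  # group or other has any access
        os.chmod(path, stat.S_IRUSR)          # 0o400: owner read-only
    return oct(stat.S_IMODE(os.stat(path).st_mode))

# Demonstrate on a throwaway file with the too-open 0644 mode from the error.
with tempfile.NamedTemporaryFile(suffix=".pem", delete=False) as f:
    key_path = f.name
os.chmod(key_path, 0o644)
print(fix_key_permissions(key_path))  # 0o400
os.remove(key_path)
```

Note that sudo isn’t needed if you own the key file; plain chmod 400 is enough.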
Also make sure the Auto-assign public IP option is set to Enable when you create the EC2 instance.
I want to regain access to my suspended account and AWS services.
Short description
To reinstate your account that’s suspended because of outstanding charges, pay all past due charges in the AWS Billing and Cost Management console.
Important:
- AWS closes your account if you don’t reinstate the account within 30 days of suspension.
- AWS terminates your account if you don’t reinstate your account within 90 days of closure.
- AWS can delete resources on a suspended account.
- Your account is permanently closed 90 days after suspension. After that, you can’t reopen the account, and AWS deletes any remaining content. To reopen your account before it’s permanently closed, contact AWS Support as soon as possible. Also, you must fully pay any outstanding balance within 60 days of the account closure date, including providing any information specified on the invoice.
Note:
- If you closed your AWS account within the past 90 days and you want to reopen it, see Can I reopen my closed AWS account?
- If the suspended account is a member account in an organization, then contact the owner of the management account.
- You can’t delete resources on a suspended account.
- AWS Support can’t delete resources on your AWS account on your behalf.
Resolution
Reinstate your account
To pay the past due charges on an account, first verify that your current payment information is accurate:
- Check Payment methods to confirm that the information associated with your payment method is correct.
- If your default payment method is no longer valid, add a new payment method, and then set it as the default payment method.
Then, follow these steps to pay your outstanding charges:
- Open the Billing and Cost Management console.
- On the navigation pane, choose Payments.
You can view your outstanding invoices on the Payments Due tab.
- On the Payments Due tab, select the invoice that you want to pay, and then choose Complete payment.
- On the Complete a payment page, confirm that the summary matches what you want to pay, and then choose Verify and Pay.
Reactivate your account
Complete the following steps:
- If you paid your past due charges in full with a credit card, then services automatically reactivate within a few minutes.
- If you paid your past due charges in full with a different payment method, then contact AWS Support to reactivate your account.
Note: Sometimes account services can take up to 24 hours to reactivate an account. If you have paid your past due charges in full and your account isn’t reactivated within 24 hours, then contact AWS Support.
Troubleshoot access issues
If your account was suspended by AWS, then you might need to provide additional information so AWS can review your reinstatement request. Check your email and spam folder to see if AWS needs any information from you to complete the reactivation process. Then, respond with the requested information and your account is reviewed for reinstatement.
If you have additional questions, or can’t provide the requested information, then contact AWS Support.
Resolution
Note: If you receive errors when you run AWS Command Line Interface (AWS CLI) commands, then see Troubleshooting errors for the AWS CLI. Also, make sure that you’re using the most recent AWS CLI version.
Troubleshoot resources that don’t follow a dynamic scaling policy
Make sure that you correctly configured your CloudWatch alarm
Create an Amazon CloudWatch alarm for the correct metric based on the AWS service that you use. You must also activate the alarm action for the CloudWatch alarm that you associated with your scaling policy.
To check your CloudWatch alarm configuration, run the following describe-alarms AWS CLI command:
aws cloudwatch describe-alarms --alarm-names example-alarm
Note: Replace example-alarm with your alarm name.
In the output, check whether ActionsEnabled is true. Also, check the Namespace, MetricName, and Dimensions values to make sure that you configured the alarm for the correct metric.
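Those same checks can be automated against the describe-alarms JSON output. A minimal sketch (the alarm_issues helper and the sample alarm are illustrative, not AWS tooling):

```python
def alarm_issues(alarm, expected_namespace, expected_metric):
    """Flag the misconfigurations described above in one MetricAlarms entry."""
    issues = []
    if not alarm.get("ActionsEnabled", False):
        issues.append("alarm actions are disabled")
    if alarm.get("Namespace") != expected_namespace:
        issues.append("unexpected namespace: %s" % alarm.get("Namespace"))
    if alarm.get("MetricName") != expected_metric:
        issues.append("unexpected metric: %s" % alarm.get("MetricName"))
    return issues

# A hypothetical describe-alarms output fragment for illustration.
alarm = {
    "AlarmName": "example-alarm",
    "ActionsEnabled": False,
    "Namespace": "AWS/ECS",
    "MetricName": "CPUUtilization",
}
print(alarm_issues(alarm, "AWS/ECS", "CPUUtilization"))
```

An empty list means the alarm passes these three checks; it doesn’t validate the alarm’s Dimensions, which you should still compare against your scalable target.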
Or, complete the following steps to check your CloudWatch alarm configuration in the CloudWatch console:
- Open the CloudWatch console.
- In the navigation pane, choose Alarms, and then choose All alarms.
- Make sure that Hide Auto Scaling alarms is deactivated.
- Under Details, make sure that the alarm uses the correct metric.
- Choose the Actions tab, and then check for the Actions disabled note.
To activate an alarm action, run the following enable-alarm-actions command:
aws cloudwatch enable-alarm-actions --alarm-names example-alarm
Note: Replace example-alarm with your alarm name.
To create an alarm for the correct metric, see Create an alarm that invokes a scaling policy or Create a target tracking scaling policy. For a list of example metrics that you can create alarms for, see CloudWatch metrics for monitoring resource usage.
Make sure that you activated scaling activities for your target
Check whether your target’s scaling activities are suspended. If they’re suspended, then resume them.
To review your scaling activities, run the following describe-scaling-activities command:
aws application-autoscaling describe-scaling-activities --include-not-scaled-activities --service-namespace example-service-namespace --scalable-dimension example-scalable-dimension --resource-id example-resource-id
Note: Replace example-service-namespace with the namespace of the AWS service that provides the resource, example-scalable-dimension with the scalable dimension, and example-resource-id with the resource ID.
In the output, check the NotScaledReasons value. If it’s AlreadyAtMaxCapacity, then your scalable target already reached its maximum capacity. If it’s AlreadyAtDesiredCapacity, then the scaling policy might not scale even if you activated your CloudWatch alarm. For more information, see Reason codes.
Note: If scaling didn’t happen multiple times for the same reason code, then check the previous NotScaledReasons value. The output doesn’t show duplicate values.
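To scan the describe-scaling-activities output for the reason codes above, a short sketch (the sample activities are hypothetical):

```python
def not_scaled_codes(activities):
    """Collect the distinct NotScaledReasons codes from scaling activities."""
    codes = set()
    for activity in activities:
        for reason in activity.get("NotScaledReasons", []):
            codes.add(reason.get("Code"))
    return codes

# A hypothetical describe-scaling-activities output fragment.
activities = [
    {"ActivityId": "1", "NotScaledReasons": [{"Code": "AlreadyAtMaxCapacity"}]},
    {"ActivityId": "2"},  # a scaling activity that did run
]
print(not_scaled_codes(activities))
```

Collecting the distinct codes works around the deduplication noted above: even if the output omits repeated reasons, any code that appeared at least once is captured.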
Check for active scaling activities that occur during step scaling policy cooldown periods or target tracking scaling policy cooldown periods. Application Auto Scaling doesn’t scale in during cooldown periods. If a scale out is larger than a scaling activity that’s in progress or in cooldown, then Application Auto Scaling scales out during the activity.
Make sure that you don’t have competing scaling policies
To review your scaling policies, run the following describe-scaling-policies command:
aws application-autoscaling describe-scaling-policies --service-namespace example-service-namespace --scalable-dimension example-scalable-dimension --resource-id example-resource-id
Note: Replace example-service-namespace with the namespace of the AWS service that provides the resource, example-scalable-dimension with the scalable dimension, and example-resource-id with the resource ID.
In the output, check whether you have multiple policies. If two policies activate at the same time, then Application Auto Scaling uses the policy with the larger effect. For example, if one policy adds two resources and another adds five, then Application Auto Scaling adds five resources. You can’t use the describe-scaling-activities command to check for multiple policies.
If you use a target tracking scaling policy, then also make sure that the DisableScaleIn parameter is False to activate scale in. By default, the parameter value is False.
If DisableScaleIn is True, then update the scaling policy JSON file to set DisableScaleIn to False. Then, run the following put-scaling-policy command to apply your changes:
aws application-autoscaling put-scaling-policy \
--service-namespace example-service-namespace --scalable-dimension example-scalable-dimension --resource-id example-resource-id --policy-name example-name --policy-type TargetTrackingScaling --target-tracking-scaling-policy-configuration file://example.json
Note: Replace example-service-namespace with the namespace of the AWS service that provides the resource, example-scalable-dimension with the scalable dimension, and example-resource-id with the resource ID. Also, replace example-name with the policy name and example.json with the scaling policy JSON file.
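The JSON edit itself can be sketched with the stdlib. The sample below is performed on a throwaway file shaped like a target tracking configuration; the metric values are illustrative:

```python
import json
import tempfile

def enable_scale_in(config_path):
    """Set DisableScaleIn to false in a target tracking policy JSON file."""
    with open(config_path) as f:
        config = json.load(f)
    config["DisableScaleIn"] = False
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)
    return config

# Demonstrate on a temp file with illustrative target tracking settings.
sample = {
    "TargetValue": 70.0,
    "PredefinedMetricSpecification": {"PredefinedMetricType": "ECSServiceAverageCPUUtilization"},
    "DisableScaleIn": True,
}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
    path = f.name
print(enable_scale_in(path)["DisableScaleIn"])  # False
```

After rewriting the file, pass it to put-scaling-policy as shown above so the change takes effect.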
Troubleshoot a scalable target that doesn’t respond to a scheduled action
Check whether you configured a time zone for the scheduled action. By default, scheduled actions are in the UTC+0 time zone. If you set a time zone, then run the following describe-scheduled-actions command to verify that the action runs based on that time zone:
aws application-autoscaling describe-scheduled-actions \
--service-namespace example-service-namespace --resource-id example-resource-id --scheduled-action-names example-name
Note: Replace example-service-namespace with the namespace of the AWS service that provides the resource, example-resource-id with the resource ID, and example-name with the action name.
Also, check the output of the preceding command to identify whether you specified a StartTime. If you specify a StartTime value, then Application Auto Scaling activates the scheduled action at that time first. After the specified start time, Application Auto Scaling activates the scheduled activity based on the cron or rate expression that you specify.
To review the scalable target’s activity history for scaling activities that conflict with your scheduled action, run the following describe-scaling-activities command:
aws application-autoscaling describe-scaling-activities --include-not-scaled-activities --service-namespace example-service-namespace --scalable-dimension example-scalable-dimension --resource-id example-resource-id
Note: Replace example-service-namespace with the namespace of the AWS service that provides the resource, example-scalable-dimension with the scalable dimension, and example-resource-id with the resource’s identifier.

