Here you’ll find a Lambda function that you can schedule to run every so often, say once a day, to handle this for you. It compares the AMIs in your main region to those in your DR region and copies over any that are missing. There are a variety of ways to do this, but many solutions I’ve seen leave old AMIs behind in the DR region after they have been deleted from the main region, so eventually you have to go in and clean up the unused ones by hand.
The function first looks for any AMIs that exist in your DR region but not in your primary region and deletes them, along with their associated snapshots. In other words, if you delete an AMI in your main region, the next run of the function will delete it from your DR region as well. For this reason, the DR region must be used strictly for disaster recovery: any custom AMIs in that region that do not exist in your primary region will be deleted. The AMI name is what’s used for the comparison, as the sketch below shows.
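Conceptually, the whole thing boils down to a set difference on the Name field of the two regions’ images. Here’s a minimal sketch of just that comparison (regions and account ID are placeholders, and the real deregister/copy work happens in the full function further down):

import boto3

# Sketch of the comparison idea only; placeholders for regions and account ID
source = boto3.client('ec2', 'us-west-2').describe_images(Owners=['XXXXXXXXXXXX'])
dest = boto3.client('ec2', 'us-east-2').describe_images(Owners=['XXXXXXXXXXXX'])

source_names = {image['Name'] for image in source['Images']}
dest_names = {image['Name'] for image in dest['Images']}

to_copy = source_names - dest_names    # only in the main region -> copy to DR
to_delete = dest_names - source_names  # only in the DR region  -> deregister and delete snapshots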
On to the good stuff. You’ll first need to create an IAM role with the following policy:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:*" ], "Resource": "arn:aws:logs:*:*:*" }, { "Effect": "Allow", "Action": "ec2:*", "Resource": "*" } ] } |
Next, create a Python 3.6 Lambda function using the IAM role you just created and paste in the following code. You’ll need to edit the top three variables: the source region, the destination region, and your account ID.
Two things I want to stress. First, make sure you have filled in your source and destination regions correctly. If you get these backwards, the function will delete all of the AMIs in your source region, so check and then check again. Second, do not use this if you have AMIs in your destination region that do not exist in your source region and that you want to keep.
import boto3

def lambda_handler(event, context):
    # Edit these three variables for your environment
    sourceRegion = 'us-west-2'
    destRegion = 'us-east-2'
    accountId = 'XXXXXXXXXXXX'

    sourceClient = boto3.client('ec2', sourceRegion)
    sourceImages = sourceClient.describe_images(Owners=[accountId])

    destClient = boto3.client('ec2', destRegion)
    destImages = destClient.describe_images(Owners=[accountId])

    # Make sure we have source images to work with
    if sourceImages['Images']:

        # Check if an image needs to be removed from the destination region
        for destImage in destImages['Images']:
            found = False
            for sourceImage in sourceImages['Images']:
                if sourceImage['Name'] == destImage['Name']:
                    found = True
            if not found:
                deleteKey = destImage['ImageId']
                if 'Description' in destImage:
                    deleteKey = destImage['Description']
                print("Deleting Image {} from {}.".format(deleteKey, destRegion))
                try:
                    destClient.deregister_image(ImageId=destImage['ImageId'])
                    for snapshot in destImage['BlockDeviceMappings']:
                        # Skip ephemeral mappings that have no EBS snapshot behind them
                        if 'Ebs' in snapshot:
                            destClient.delete_snapshot(SnapshotId=snapshot['Ebs']['SnapshotId'], DryRun=False)
                except destClient.exceptions.ClientError:
                    print("Image no longer exists")

        # Check if an image needs to be copied to the destination region
        for sourceImage in sourceImages['Images']:
            found = False
            for destImage in destImages['Images']:
                if sourceImage['Name'] == destImage['Name']:
                    found = True
            # Didn't find the AMI, copy it
            if not found:
                print("Copying Image {} to {}.".format(sourceImage.get('Description', sourceImage['Name']), destRegion))
                new_ami = destClient.copy_image(
                    DryRun=False,
                    SourceRegion=sourceRegion,
                    SourceImageId=sourceImage['ImageId'],
                    Name=sourceImage['Name'],
                    Description=sourceImage.get('Description', '')
                )
                destClient.create_tags(
                    Resources=[new_ami['ImageId']],
                    Tags=[{'Key': "Name", 'Value': sourceImage['Name']}]
                )
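If you prefer deploying from a script instead of pasting the code into the console, something along these lines should work with boto3 (the function name, role ARN, and lambda_function.py file name are assumptions on my part; the file holds the handler code above):

import io
import zipfile
import boto3

# Zip the handler code in memory for upload
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.write('lambda_function.py')

lambda_client = boto3.client('lambda', 'us-west-2')
lambda_client.create_function(
    FunctionName='ami-dr-sync',
    Runtime='python3.6',
    Role='arn:aws:iam::XXXXXXXXXXXX:role/ami-dr-sync-role',
    Handler='lambda_function.lambda_handler',
    Code={'ZipFile': buf.getvalue()},
    Timeout=300,
)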
Once you have checked and double-checked your source and destination regions, you can save and test. After a bit, you’ll see all of your AMIs being created in the specified destination region. Once that looks good, you can schedule it to run once a day with a scheduled CloudWatch Events rule by following these steps (or set it up programmatically with the sketch after the list):
- Go to Services, Lambda, and click the function name
- Click on Triggers and then on Add trigger
- Select CloudWatch Events
- Create a new Rule.
- Schedule Expression: cron(0 0 * * ? *)
- Check Enable Trigger
- Click Submit
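For the programmatic route, here’s a rough boto3 equivalent of those console steps (the rule name, function name, and account ID are placeholders; it creates the daily rule, grants CloudWatch Events permission to invoke the function, and wires the function up as the target):

import boto3

region = 'us-west-2'
account_id = 'XXXXXXXXXXXX'
function_name = 'ami-dr-sync'

events = boto3.client('events', region)
lambda_client = boto3.client('lambda', region)

# Daily schedule, same cron expression as above
rule = events.put_rule(
    Name='ami-dr-sync-daily',
    ScheduleExpression='cron(0 0 * * ? *)',
    State='ENABLED',
)

# Allow CloudWatch Events to invoke the function
lambda_client.add_permission(
    FunctionName=function_name,
    StatementId='ami-dr-sync-daily-event',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn'],
)

# Point the rule at the Lambda function
events.put_targets(
    Rule='ami-dr-sync-daily',
    Targets=[{
        'Id': 'ami-dr-sync',
        'Arn': 'arn:aws:lambda:{}:{}:function:{}'.format(region, account_id, function_name),
    }],
)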