For production you probably have a domain name in place. For dev, sandbox, playground, or learning environments you would like to have a domain name as well. You could register an individual domain name for everyone, but that is both a waste of domain names and a waste of money. Sandboxes should also be easy to erase, and a registered domain name gets in the way of that. In this blog post I’ll describe how to set up a central domain name for your group of users, and create subdomains for everyone.

Architecture High Level Overview

Why?

Why do we need a DNS record in the first place? Here are some examples:

  • Bastion Host could be bastion.sandbox.domain.com instead of ec2-234-34-2-56.compute-1.eu-west-1.amazonaws.com
  • CloudFront could be web.sandbox.domain.com instead of d111111abcdef8.cloudfront.net
  • API Gateway could be api.sandbox.domain.com instead of api-1222abcdef8.execute-api.eu-west-1.amazonaws.com
  • Certificate Manager and Simple E-mail Service (SES) are much easier when you can validate domain ownership with Route 53, rather than having to do the verification with dozens of manual steps.

Learning Objectives

  • Using Route 53 HostedZones & RecordSets
  • Using CloudFormation for Route 53, Lambda, SNS, etc
  • Using CloudFormation Custom Resources
  • Using Lambda Custom Resources, Cross-Account, with an SNS Topic and IAM permissions

Introduction

Before we start, make sure you’re working in the eu-west-1 or us-east-1 region. I’ve published the required resources in these regions only: the Lambda Functions and Lambda Layers are uploaded to public S3 buckets for your convenience. Using another region is not hard; you just have to create your own S3 buckets and build and upload the resources yourself.

We have two accounts, a Central Account that contains the HostedZone for domain.com, and a Sandbox Account, where we will deploy a Route 53 Hosted Zone and an S3 Static Website using sandbox.domain.com.

This blog post is set up in three steps:

  1. Step-by-step deployment of the Custom Resource Provider in the Central Account, which allows the Sandbox Accounts to create the required name server (NS) RecordSets for their subdomains.
  2. Create a new Route 53 Hosted Zone for sandbox.domain.com in the Sandbox Account, and add the NameServers to the domain.com HostedZone in the Central Account using the custom resource provider deployed in step 1.
  3. Create an S3 bucket with an index file that holds our “Hello World” content, and an ALIAS record for the S3 Bucket.

Services

A quick introduction to the AWS services before I jump into the code.

  • Route 53. This is the service where you can register a domain name and manage all its records. AWS calls it a HostedZone, with a unique ID. The records it holds are called RecordSets. These are the DNS records you might be familiar with: A, AAAA, MX, NS, TXT, etc.
  • S3. To prove we have built a proper DNS service, I’ll create an S3 bucket and use it as a static website hosting solution. The BucketName must be exactly the domain name, otherwise it cannot be used. I’ll use sandbox.domain.com in this example, and show a “hello world”.
  • CloudFormation. With CloudFormation you can describe in a simple text file which services to deploy and how they are configured. Creating a HostedZone or S3 bucket is easy and natively supported. I want to deploy a RecordSet in a separate AWS account, and create an index.html file in the S3 bucket. Those features are not available in CloudFormation, so I will create CloudFormation Custom Resources.
  • Lambda. Lambda runs a few lines of code, for example Python or Node.js, triggered by an event. An event could be an HTTP request via API Gateway, or in our case the deployment of a CloudFormation Custom Resource. The Lambda Function is the Custom Resource Provider, and the result of an execution is called a Custom Resource.
  • SNS. Simple Notification Service is a messaging service. Amazon SNS provides topics for high-throughput, push-based, many-to-many messaging. Using Amazon SNS topics, your publisher systems can fan out messages to a large number of subscriber endpoints for parallel processing, including AWS Lambda functions.

Prerequisites

In order to test this on your own AWS accounts, you need to have a Route 53 Hosted Zone with a configured public domain name. In this example I’ll use domain.com, but if you have something like sub.domain.com or sub.sub.bla.net, that works perfectly fine too.

You can work with two separate AWS accounts, or a single one. I’ve tested with two AWS accounts. If you trust me that it works cross-account, you can do everything in a single account, which saves you some time. I’ll keep two CloudFormation stacks to show the difference.

Step 1: Prepare the Central Account

In this step I’m going to deploy a Custom Resource Provider in the Central Account. It also includes an SNS Topic that triggers the Lambda Function. As mentioned before, the Hosted Zone is already in place (domain.com / ZH0ST3DZ0N3). You also add a list of all the AWS Accounts that are authorized to send messages to your SNS Topic.

Let’s start by creating a new file central/template.yml. The HostedZoneId parameter is required. AuthorizedAccounts can be left blank if you want to use the Pseudo AccountId (notice the condition and the inline !If function later), which is the AccountId of the account where the stack gets deployed. When working cross-account, the Account IDs of all sandbox accounts must be specified.

Parameters:
  HostedZoneId: 
    Type: String
  AuthorizedAccounts: 
    Type: CommaDelimitedList
    Default: ""
Conditions:
  UsePseudoAccountId: !Equals [!Select [0, !Ref AuthorizedAccounts], ""]

Now I’ll add the first three resources for the Custom Resource Provider: the Lambda Layer, the Lambda Function, and an IAM Role that allows the Lambda Function to update the Route 53 records.

Resources:

  LambdaFunctionPythonBoto3RequestsLayer:
    Type: AWS::Lambda::LayerVersion
    Properties:
      CompatibleRuntimes:
        - python3.8
      Content:
        S3Bucket: !Sub 'htcr-${AWS::Region}'
        S3Key: "route53nscr/layer.zip"

  LambdaFunctionCreateRoute53RecordSet:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.lambda_handler
      Runtime: python3.8
      Timeout: 10
      Layers:
        - !Ref LambdaFunctionPythonBoto3RequestsLayer
      Environment:
        Variables:
          HOSTED_ZONE_ID: !Ref HostedZoneId
      Role: !GetAtt IAMRoleForLambdaFunctionCreateRoute53RecordSet.Arn
      Code:
       S3Bucket: !Sub 'htcr-${AWS::Region}'
       S3Key: 'route53nscr/lambda-custom-resource-ns-recordset.zip'

  IAMRoleForLambdaFunctionCreateRoute53RecordSet:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - sts:AssumeRole
      Path: "/"
      ManagedPolicyArns: 
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName: "Route53"
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - "route53:GetHostedZone"
                  - "route53:ChangeResourceRecordSets"
                Resource: !Sub "arn:aws:route53:::hostedzone/${HostedZoneId}"

There are multiple ways to deploy Lambda Function code with CloudFormation:

  • Inline. I could put all Python code inline in the CloudFormation template. I can’t use that here, because we need modules like requests (import requests). Also, python3.8 does not support inline code anymore, and I want to work with python3.8.
  • Lambda Zip. I could also zip the whole Lambda Function, including my own modules and the installed modules. This is sometimes referred to as a “fat lambda”. The downside is that you can’t edit the Lambda Function in the Management Console, which is a handy feature during development and troubleshooting.
  • Lambda Layers. I could deploy a Lambda Layer, which is a zip file with all modules installed. I can reference this Lambda Layer in the Lambda Function, and the Lambda Function itself is kept small and editable in the console. This is the approach I’ll use for this blog post.
  • SAM. I’m using native CloudFormation in this blog post. I could also use the SAM Transform, with resource types like AWS::Serverless::Function and AWS::Serverless::LayerVersion. During deployment all files are uploaded to S3 and a new template is generated, including references to the S3 bucket objects. A lot of things are abstracted away for you, which is handy, but not what I want in this blog post.

I’ve already uploaded the zip to a public S3 bucket. It contains the following Lambda Function code.

import json
import boto3
import sys
import os
import requests

# Send the Custom Resource response back to the pre-signed S3 URL
# that CloudFormation provides in the event (event['ResponseURL']).
def cfnsend(event, context, status, **kwargs):
    responseBody = {
      'Status': status,
      'Reason': kwargs.get('reason', ''),
      'StackId': event['StackId'],
      'RequestId': event['RequestId'],
      'PhysicalResourceId': kwargs.get('id', None),
      'LogicalResourceId': event['LogicalResourceId'],
      'NoEcho': kwargs.get('noEcho', False),
      'Data': kwargs.get('data', {})
    }
    json_responseBody = json.dumps(responseBody)
    headers = {
        'content-type' : '',
        'content-length' : str(len(json_responseBody))
    }
    requests.put(event['ResponseURL'],
                 data=json_responseBody,
                 headers=headers)


def lambda_handler(event, context):

  try:
    client = boto3.client('route53')
    hosted_zone_id = os.getenv('HOSTED_ZONE_ID')
  except:
    cfnsend(event, context, 'FAILED',
            reason='Something early in the process went wrong.')
    return

  try:
    # when present, strip the SNS message headers
    if 'Records' in event:
      event = json.loads(event['Records'][0]['Sns']['Message'])
      print("Lambda Event: " + json.dumps(event))

    type = event['RequestType']
    domain_name = event['ResourceProperties']['DomainName']
    name_servers = event['ResourceProperties']['NameServers']

    resource_records = []
    for record in name_servers:
      resource_records.append({"Value": record})
  except:
    cfnsend(event, context, 'FAILED',
      reason='DomainName or NameServers not specified or event is malformed.')
    return

  if type == 'Create' or type == 'Update':
    try:
      response = client.change_resource_record_sets(
        HostedZoneId=hosted_zone_id,
        ChangeBatch= {
          'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
              'Name': domain_name,
              'Type': 'NS',
              'TTL': 300,
              'ResourceRecords': resource_records
            }
          }]
        }
      )
      print(response)
    except:
      pass
  elif type == 'Delete':
    try:
      response = client.change_resource_record_sets(
        HostedZoneId=hosted_zone_id,
        ChangeBatch= {
          'Changes': [{
            'Action': 'DELETE',
            'ResourceRecordSet': {
              'Name': domain_name,
              'Type': 'NS',
              'TTL': 300,
              'ResourceRecords': resource_records
            }
          }]
        }
      )
      print(response)
    except:
      pass

  cfnsend(event, context, 'SUCCESS', id=domain_name, reason='RecordSet '+type+'d')

The layer is also uploaded to S3. To create a Lambda Layer for the Python modules, I use the packaging script lambda_layer/deploy shown below, which creates the zip file and uploads it to S3. It installs the modules listed in lambda_layer/requirements.txt: boto3 on line 1 and requests on line 2.
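
For reference, this is one way to create that requirements file from the shell (a minimal sketch; only the two module names come from this post, the heredoc is just one way to write the file):

# write lambda_layer/requirements.txt with the two modules the Lambda code imports
cat > lambda_layer/requirements.txt <<'EOF'
boto3
requests
EOF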

#!/bin/bash
mkdir python
pip install -q -r requirements.txt -t ./python/
zip -r layer.zip ./python
rm -rf python
aws s3 cp --acl public-read layer.zip s3://yourbucket/layer.zip
rm layer.zip

Now is a good time to deploy the stack for the first time. I create a script central/deploy (chmod +x central/deploy) to make deployment easier, and execute it in the root of my project folder, like this: ./central/deploy. The AuthorizedAccounts parameter is optional; if you leave it out or blank, the template will use the Pseudo AccountId.

#!/bin/bash
aws cloudformation deploy \
  --stack-name central \
  --template-file central/template.yml \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides HostedZoneId=ZH0ST3DZ0N3 \
                        AuthorizedAccounts=111111222222,222222333333

When it is deployed successfully, I want to add an SNS Topic and put it in front of the Lambda Function. SNS Topics make it easier to expose the Lambda Function to a specific set of AWS Accounts.

I add three resources: the SNS Topic itself, with a subscription that invokes the Lambda Function whenever a message is sent to the topic; a Topic Policy, to allow other AWS accounts to publish messages to the SNS Topic; and a Lambda Permission, to allow SNS to invoke the Lambda Function.

#Resources:

  # LambdaFunctionPythonBoto3RequestsLayer
  # LambdaFunctionCreateRoute53RecordSet
  # IAMRoleForLambdaFunctionCreateRoute53RecordSet

  SNSTopicCreateRoute53RecordSet: 
    Type: AWS::SNS::Topic
    Properties: 
      Subscription: 
        - Endpoint: !GetAtt LambdaFunctionCreateRoute53RecordSet.Arn
          Protocol: lambda
      TopicName: CreateRoute53RecordSet

  SNSTopicPolicyCreateRoute53RecordSet:
    Type: AWS::SNS::TopicPolicy
    Properties:
      Topics:
        - !Ref SNSTopicCreateRoute53RecordSet
      PolicyDocument:
        Statement:
        - Effect: Allow
          Principal: 
            AWS:
              !If
              - UsePseudoAccountId
              - !Sub "${AWS::AccountId}"
              - !Ref AuthorizedAccounts
          Action: 'sns:Publish'
          Resource: !Ref SNSTopicCreateRoute53RecordSet

  LambdaPermissionCreateRoute53RecordSet:
    Type: AWS::Lambda::Permission
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName: !Ref LambdaFunctionCreateRoute53RecordSet
      Principal: 'sns.amazonaws.com'
      SourceArn: !Ref SNSTopicCreateRoute53RecordSet

I deploy the updated central/template.yml. When this is done, we are also done with the Central Account. Everything is in place to build our sandbox environment.

Step 2: Create the Sandbox HostedZone

I create a sandbox/template.yml where I add two parameters. The first is DomainName, and contains something like “sandbox.domain.com”. I often see companies use people’s names or initials, like martijn.companysandboxes.net or mvd.heroes.bigcompany.com. The other is CentralAccountId. If you set this parameter, it will use the specified AccountId. If you leave it blank, CloudFormation will use the Pseudo AccountId of the account the stack is deployed in (in case you’re deploying in a single account). If you renamed the SNS Topic earlier, also update the ServiceToken in this template.

I immediately add two resources: the Hosted Zone for sandbox.domain.com, and the CloudFormation Custom Resource that adds the name servers to the Hosted Zone (domain.com) in the Central Account. You can recognize a Custom Resource by its type: it starts with Custom:: followed by a name you can choose yourself. A Custom Resource requires a ServiceToken, which points to the Arn of the SNS Topic (or directly to a Lambda Function). The Custom Resource Provider I created earlier in the Central Account requires two properties: DomainName (sandbox.domain.com) and the NameServers AWS has selected for the Hosted Zone that is created in this stack.

Parameters:
  DomainName:
    Type: String
  CentralAccountId:
    Type: String
    Default: ""

Conditions: 
  UsePseudoAccountId: !Equals [!Ref CentralAccountId, ""]

Resources:

  Route53HostedZone: 
    Type: AWS::Route53::HostedZone
    Properties:
      Name: !Ref DomainName

  CreateRoute53RecordSet:
    Type: Custom::NameServers
    Properties:
      ServiceToken:
        !If
        - UsePseudoAccountId
        - !Sub "arn:aws:sns:${AWS::Region}:${AWS::AccountId}:CreateRoute53RecordSet"
        - !Sub "arn:aws:sns:${AWS::Region}:${CentralAccountId}:CreateRoute53RecordSet"
      DomainName: !Ref DomainName
      NameServers: !GetAtt Route53HostedZone.NameServers
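
I deploy this stack in the Sandbox Account, the same way as the central one. Below is a sketch of a sandbox/deploy script; the stack name sandbox and the parameter values are examples, and CentralAccountId can be left out when everything runs in a single account:

#!/bin/bash
# Example values only: replace DomainName and CentralAccountId with your own.
# CAPABILITY_IAM is needed once the IAM Role from step 3 is added to this template.
aws cloudformation deploy \
  --stack-name sandbox \
  --template-file sandbox/template.yml \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides DomainName=sandbox.domain.com \
                        CentralAccountId=111111222222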

Now we have a Hosted Zone available in our Sandbox Account. We can use it for several services you have seen in the introduction section of this blog post.

Step 3: Add another Custom Resource, to create an S3 object

In step 4 I’m going to deploy the S3 Bucket, enable Web Hosting, add a file, and register an ALIAS record. There is no CloudFormation Resource available to create an S3 File, so again I need to create and use a Custom Resource Provider. This is what step 3 is all about.

Add the following three resources to sandbox/template.yml: a Lambda Layer, a Lambda Function, and an IAM Role with permissions to create S3 objects. I use the same approach for this Lambda Function as for the previous one, so this one is also zipped and uploaded to the public S3 bucket.

#Resources:

  LambdaFunctionPythonBoto3RequestsLayer:
    Type: AWS::Lambda::LayerVersion
    Properties:
      CompatibleRuntimes:
        - python3.8
      Content:
        S3Bucket: !Sub "htcr-${AWS::Region}"
        S3Key: "route53nscr/layer.zip"

  LambdaFunctionCreateS3File:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.lambda_handler
      Runtime: python3.8
      Timeout: 11
      Role: !GetAtt IAMRoleForLambdaFunctionCreateS3File.Arn
      Layers:
        - !Ref LambdaFunctionPythonBoto3RequestsLayer
      Code:
        S3Bucket: !Sub "htcr-${AWS::Region}"
        S3Key: "route53nscr/lambda-custom-resource-s3-file.zip"

  IAMRoleForLambdaFunctionCreateS3File:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - sts:AssumeRole
      ManagedPolicyArns: 
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName: "S3"
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action: "s3:*"
                Resource: "*"

This is the Custom Resource Provider code:

import json
import boto3
import sys
import os
import requests

# Send the Custom Resource response back to the pre-signed S3 URL
# that CloudFormation provides in the event (event['ResponseURL']).
def cfnsend(event, context, status, **kwargs):
    responseBody = {
      'Status': status,
      'Reason': kwargs.get('reason', ''),
      'StackId': event['StackId'],
      'RequestId': event['RequestId'],
      'PhysicalResourceId': kwargs.get('id', None),
      'LogicalResourceId': event['LogicalResourceId'],
      'NoEcho': kwargs.get('noEcho', False),
      'Data': kwargs.get('data', {})
    }
    json_responseBody = json.dumps(responseBody)
    headers = {
        'content-type' : '',
        'content-length' : str(len(json_responseBody))
    }
    requests.put(event['ResponseURL'],
                 data=json_responseBody,
                 headers=headers)

def lambda_handler(event, context):

  print("Original Event: " + json.dumps(event))

  try:
    client = boto3.client('s3')
    type = event['RequestType']
    bucket = event['ResourceProperties']['Bucket']
    key = event['ResourceProperties']['Key']
    content = event['ResourceProperties']['Content']
  except:
    cfnsend(event, context, 'FAILED',
      reason='Bucket, Key or Content missing, event is malformed, or boto3 client issues')
    return

  if type == 'Create' or type == 'Update':
    try:
      client.put_object(
        ACL='public-read',
        Bucket=bucket,
        Key=key,
        ContentType='text/html',
        Body=content.encode()
      )
    except:
      pass
  elif type == 'Delete':
    try:
      client.delete_object(
        Bucket=bucket,
        Key=key
      )
    except:
      pass
  cfnsend(event, context, 'SUCCESS', id=bucket+'/'+key, reason=type+' Done')

Step 4: The Bucket, The File and The RecordSet

To create an ALIAS record for your S3 Bucket, you need to reference some AWS-owned hosted zones for the S3 website endpoints. They are listed in the AWS documentation. I’ve added only two in the Mappings section of the CloudFormation template.

I create an S3 Bucket with web hosting enabled. Then I use the previously created Custom Resource Provider to create an index.html file in the S3 Bucket. It’s possible to use a Custom Resource Provider that is part of the same stack. Nice! Finally, I’ll add the RecordSet. The documentation for creating a RecordSet is huge! Of course there are lots of different DNS records possible. It took me almost an hour to figure out this configuration.

Mappings:
  RegionMap:
    eu-west-1:
      WebsiteEndpoint: 's3-website-eu-west-1.amazonaws.com'
      HostedZoneId: 'Z1BKCTXD74EZPE'
    us-east-1:
      WebsiteEndpoint: 's3-website-us-east-1.amazonaws.com'
      HostedZoneId: 'Z3AQBSTGFYJSTF'

#Resources:

  # Route53HostedZone
  # CreateRoute53RecordSet
  # LambdaFunctionPythonBoto3RequestsLayer
  # LambdaFunctionCreateS3File
  # IAMRoleForLambdaFunctionCreateS3File

  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref DomainName
      AccessControl: PublicRead
      WebsiteConfiguration:
        IndexDocument: index.html
        ErrorDocument: error.html

  S3IndexFile:
    Type: Custom::IndexFile
    Properties:
      ServiceToken: !GetAtt LambdaFunctionCreateS3File.Arn
      Bucket: !Ref S3Bucket
      Key: "index.html"
      Content: "Hello World"

  Route53RecordSet: 
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref Route53HostedZone
      Name: !Ref DomainName
      Type: A
      AliasTarget:
        DNSName: !FindInMap [RegionMap, !Ref 'AWS::Region', WebsiteEndpoint]
        HostedZoneId: !FindInMap [RegionMap, !Ref 'AWS::Region', HostedZoneId]

After I update sandbox/template.yml and deploy the template, I can browse to “sandbox.domain.com” and see the Hello World!
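
To verify the whole chain, you could also check the NS delegation and fetch the page from the command line. A quick sketch, using the example domain from this post (S3 website endpoints are HTTP only):

# check that the NS delegation for the subdomain resolves
dig +short NS sandbox.domain.com
# fetch the static website over HTTP
curl http://sandbox.domain.com/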

Cleanup

First delete your sandbox stack and wait until it’s deleted. Then delete the central stack. The sandbox stack has a dependency on central because of the Custom Resource Provider created in step 1: deleting the sandbox stack sends Delete events through the SNS Topic, so the central stack must still be in place at that moment.
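
A sketch of that cleanup from the command line, assuming the stack names sandbox and central used earlier (run each command in the account that owns the stack, for example with --profile):

# delete the sandbox stack first and wait until it is gone,
# because its Delete events still need the Custom Resource Provider in central
aws cloudformation delete-stack --stack-name sandbox
aws cloudformation wait stack-delete-complete --stack-name sandbox
# then the central stack can be removed
aws cloudformation delete-stack --stack-name central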

Conclusion

We have learned how to work with Route 53 and cross-account Custom Resource Providers, and how to use a working Hosted Zone with a public domain name. All this knowledge and these code snippets can be used in your own setup. I hope you enjoyed the post, and I’m happy to hear your feedback.

The Python functions are error prone. When a domain name is already configured for another account, they will simply overwrite the record, and they accept RecordSets that don’t match the hosted zone (e.g. sandbox.domain.com in the hosted zone sandboxes.net). Making this production ready requires quite some refactoring, and the functions would become much longer and more difficult to read.

-Martijn
