Extending the AWS CLI with my own Python/Click based CLI

April 14, 2020

For a few years I've been working with my own CLI for AWS. The code was quite a mess, so recently I decided to start from scratch. Giving you access to my highly opinionated commands and (in)sane defaults doesn't make much sense; instead, I'll show you how to build your own CLI, with some tips & tricks that might be valuable. At the end of this blog post, you'll be able to do things like:

Create S3 objects:

$ aws s3a make --content-type 'text/html' s3://mybucket/myfile.html "Hello World"

Quickly view what is in an S3 file:

$ aws s3a cat s3://mybucket/myfile.txt
Hello World

Use default commands that are available in aws s3:

$ aws s3a ls
Bucket1
Bucket2
Bucket3

Make an alias for cloudformation and create a shortcut to list active stacks:

$ aws cfn list 
Stack1
Stack2
Stack3

And of course the default commands still work here too:

$ aws cfn list-stacks
{
  ...
}

Services

About the tools we’re using:

  • Command Line Interface. The AWS CLI makes it easier for you to automate infrastructure. Often it's a combination of CloudFormation and CLI commands, and while developing, typing these commands can consume a lot of time.
  • CloudFormation. This is the service AWS provides to turn a JSON- or YAML-formatted template into a deployed stack.
  • S3. The service AWS provides to store objects. Sometimes you want to quickly see the contents of a file, or maybe write something to a file. The AWS CLI does not provide this feature by default.

Introduction

This blog post is divided into three steps.

  1. Introduction to AWS CLI aliases and shortcuts.
  2. Create your own CLI with Python and Click, and install it using pip.
  3. Add sane defaults and more shortcuts, and forward aws commands to the new CLI.

Prerequisites

  • An AWS account
  • Installed awscliv1 or awscliv2
  • Installed Python 3.8 (on a Mac, brew install pyenv is highly recommended)
  • An active session in the terminal (aws sts get-caller-identity should work)
  • Basic knowledge of Python, and a quick look at the boto3 and Click documentation

Step 1: AWS CLI Alias

I add the following content to ~/.aws/cli/alias.

[toplevel]
whoami = sts get-caller-identity

myip =
  !f() {
    dig +short myip.opendns.com @resolver1.opendns.com
  }; f

cfn = 
  !f() {
    mycli cfn $@
  }; f

The ! prefix tells the AWS CLI to run the alias as a shell command; wrapping the body in a function makes it possible to forward arguments with $@. I open a new terminal and try out my shortcuts.

$ aws whoami
{
  "UserId": "AHJHEFSEFSDLKVJERWEFDS:session",
  "Account": "111111222222",
  "Arn": "arn:aws:sts::111111222222:assumed-role/admin/session"
}

$ aws myip
82.53.64.3

$ aws cfn list-stacks
{
  ...
}

Step 2: Creating my own CLI

I start with this code structure. My command will be mycli. It doesn't really matter what name you use, as long as it's not already taken by other tools. In step 3, we will make mycli available through aws.

The __init__.py files are empty, but they make it possible to properly import all modules.

├── README.md
├── mycli
│   ├── __init__.py
│   ├── commands
│   │   ├── __init__.py
│   │   └── s3.py
│   ├── helper.py
│   └── main.py
└── setup.py
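If you're starting from scratch, the skeleton above can be created with a couple of commands (file names taken from the tree; adjust to taste):

```shell
# Create the project skeleton from the tree above.
mkdir -p mycli/commands
touch README.md setup.py \
      mycli/__init__.py mycli/helper.py mycli/main.py \
      mycli/commands/__init__.py mycli/commands/s3.py
```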

setup.py

I create a minimal setup.py. If you're ever going to publish your CLI, you should fill in all the details. Also declare all the dependencies: mycli is going to use boto3, a CloudFormation linter, a config parser, etc. Extend this list with whatever tools you add to your CLI.

import setuptools

setuptools.setup(
  name="mycli",
  version="0.0.1",
  packages=setuptools.find_packages(),
  entry_points={
      'console_scripts': ['mycli = mycli.main:cli']
  },
  install_requires=[
    'boto3', 
    'requests', 
    'configparser', 
    'click', 
    'cfn_flip', 
    'pylint', 
    'terminaltables', 
    'cfn-lint'
  ],
  python_requires='>=3.8',
)

mycli/main.py

Here we merge all subcommands into the main CLI. If you want to add another command group, replace "command" below with the short name you'd like to use, for example cfn, iam, sts or ecs.

  1. Create mycli/commands/command.py by copying one of the existing commands (mycli/commands/s3.py).
  2. Add a line at the end of the import section on top of the main.py file: from mycli.commands.command import command.
  3. Add cli.add_command(command) after the last added command.
  4. Update the mycli/commands/command.py to avoid collisions.
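The pattern behind those four steps is just another Click group wired into the main group. A standalone sketch (the command and argument names are hypothetical, and no AWS calls are made):

```python
import click

@click.group()
def cfn():
  """Hypothetical 'cfn' command group, created like mycli/commands/s3.py."""
  pass

@cfn.command()
@click.argument('stack_name')
def describe(stack_name):
  # A real implementation would call boto3 here.
  click.echo(f"describing stack {stack_name}")

@click.group()
def cli():
  pass

# Step 3 from the list above: attach the new group to the main CLI.
cli.add_command(cfn)
```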
import click
from mycli.commands.s3 import s3

CONTEXT_SETTINGS = dict(help_option_names=['-h', '--help'])

@click.group(context_settings=CONTEXT_SETTINGS)
def cli(**kwargs):
  pass

cli.add_command(s3)

By the way, there is probably a way to loop over all the modules and import them dynamically. For me it's not a big deal, and explicit imports don't violate any code style standards. In the example it's only one import; in my private CLI there are more, of course.

mycli/helper.py

In helper.py I collect all functions that are used multiple times in the application. Boto3 presents most results as a dictionary, and with pprint(obj) I can easily print that as JSON.

import json
import datetime

def default(o):
  # json.dumps can't serialize datetime objects; render them as ISO 8601 strings
  if isinstance(o, (datetime.date, datetime.datetime)):
    return o.isoformat()

def pprint(obj):
  # Drop the noisy boto3 response metadata, then print the rest as JSON
  if 'ResponseMetadata' in obj:
    del obj['ResponseMetadata']
  print(json.dumps(obj, indent=2, default=default))
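The default function matters because boto3 responses contain datetime objects, which json.dumps refuses to serialize on its own. A quick standalone check (with a made-up stack dictionary):

```python
import json
import datetime

def default(o):
  # Same helper as above: render dates as ISO 8601 strings.
  if isinstance(o, (datetime.date, datetime.datetime)):
    return o.isoformat()

obj = {'StackName': 'teststack',
       'CreationTime': datetime.datetime(2020, 4, 8, 18, 49)}
print(json.dumps(obj, indent=2, default=default))
```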

mycli/commands/s3.py

I'll give you s3 as an example. AWS already created a few shortcuts for working with S3. These are the features it lacks that I want to use quite often:

  • mycli s3 cat s3://mybucket/myfile.txt. It shows the content of a file. Don’t do this with large files.
  • mycli s3 make s3://mybucket/myfile.txt "Hello World". It creates an S3 object with some content in it.
import click
import boto3
from mycli.helper import pprint

def s3_split(s3path):
  # Split "s3://bucket/path/to/file.txt" into ("bucket", "path/to/file.txt")
  parts = s3path.split('/')
  if parts[0] != 's3:':
    print("use s3://bucket/path/to/file.txt")
    exit(1)
  bucket = parts[2]
  key = s3path[(len(bucket) + 6):]  # skip "s3://" plus the bucket and the slash
  return bucket, key

@click.group()
def s3():
  pass

# cat 
@s3.command()
@click.argument('s3path')
def cat(**kwargs):
  bucket, key = s3_split(kwargs['s3path'])
  client = boto3.client('s3')
  result = client.get_object(Bucket=bucket, Key=key)
  text = result["Body"].read().decode()
  print(text)

# make 
@s3.command()
@click.argument('s3path')
@click.argument('content', default="")
@click.option('--acl', 
  default='private',
  help="Access Control List: private | public-read")
@click.option('--content-type', 
  default='text/plain',
  help="Content type of the file: text/plain | text/html | ..")
def make(**kwargs):
  bucket, key = s3_split(kwargs['s3path'])
  client = boto3.client('s3')
  client.put_object(
    ACL=kwargs['acl'],
    Body=bytes(kwargs['content'], 'utf-8'),
    ContentType=kwargs['content_type'],
    Bucket=bucket,
    Key=key
  )

Installation

Locate setup.py and execute the following command in that folder. This way you can easily make changes to your CLI that are immediately available. The installation creates a link to your project folder, so after moving the folder you may need to reinstall.

$ pip install -e .
Installing...

Now that it's installed, I can view the generated help pages.

$ mycli --help 
Usage: mycli [OPTIONS] COMMAND [ARGS]...

Options:
  -h, --help  Show this message and exit.

Commands:
  s3
$ mycli s3 --help
Usage: mycli s3 [OPTIONS] COMMAND [ARGS]...

Options:
  -h, --help  Show this message and exit.

Commands:
  cat
  make
$ mycli s3 make --help
Usage: mycli s3 make [OPTIONS] S3PATH [CONTENT]

Options:
  --acl TEXT           Access Control List: private | public-read
  --content-type TEXT  Content type of the file, like text/plain or text/html
  -h, --help           Show this message and exit.

Step 3: Add mycli to aws cli & more

Now I want my brand new CLI to work through the aws command. I can use aliases here as well. I already have a shortcut for cfn (CloudFormation); now I'll create one for S3 as well: s3a. Anything shorter than "s3" is not practical, and s3a makes sense, with the a standing for alias. Now aws s3 still works, aws s3api too, and aws s3a is an alias that combines aws s3 and mycli.

s3a = 
  !f() {
    if [ "$1" == "make" ]; then
      mycli s3 $@
    elif [ "$1" == "cat" ]; then
      mycli s3 $@
    else
      aws s3 $@
    fi
  }; f
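The same dispatch logic can be written a bit more compactly with a case statement. Here it is as a plain shell function, with the two target commands echoed instead of executed so you can see where each subcommand would go:

```shell
# Dispatch sketch: 'make' and 'cat' go to mycli, everything else to aws s3.
s3a_dispatch() {
  case "$1" in
    make|cat) echo "mycli s3 $*" ;;   # would run: mycli s3 "$@"
    *)        echo "aws s3 $*" ;;     # would run: aws s3 "$@"
  esac
}
s3a_dispatch cat s3://mybucket/myfile.txt
```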

Now I have both the original mycli command available and the alias aws s3a, which dispatches each subcommand to mycli or to aws s3.

$ mycli s3 cat s3://mybucket/myfile.txt
$ aws s3a cat s3://mybucket/myfile.txt

$ mycli s3 make s3://mybucket/myfile.txt "Hello World"
$ aws s3a make s3://mybucket/myfile.txt "Hello World"

$ aws s3a ls
Bucket1
Bucket2

I'll also add these shortcuts and sane defaults for CloudFormation. I created my own CLI commands for CloudFormation as well, but they're work in progress, so I'm not sharing them yet.

cfn = 
  !f() {
    if [ "$1" == "list" ]; then
      aws cloudformation list-stacks \
      --query "StackSummaries[?StackStatus != 'DELETE_COMPLETE' && starts_with(StackName, '${2}')].{StackName: StackName, StackStatus: StackStatus, UpdateTime: LastUpdatedTime}" \
      --output table
    elif [ "$1" == "describe" ]; then
      aws cloudformation describe-stacks --stack-name $2
    elif [ "$1" == "delete" ]; then
      aws cloudformation delete-stack --stack-name $2
    elif [ "$1" == "outputs" ]; then
      aws cloudformation describe-stacks \
        --stack-name $2 \
        --query "Stacks[].Outputs[].{OutputKey: OutputKey, OutputValue: OutputValue}" \
        --output table
    elif [ "$1" == "resources" ]; then
      aws cloudformation describe-stack-resources \
        --stack-name $2 \
        --query "StackResources[].{ResourceStatus: ResourceStatus, LogicalResourceId: LogicalResourceId, PhysicalResourceId: PhysicalResourceId}" \
        --output table
    elif [ "$1" == "events" ]; then
      aws cloudformation describe-stack-events \
        --stack-name $2 \
        --query "StackEvents[].[Timestamp,ResourceStatus,LogicalResourceId,ResourceStatusReason]" \
        --output table
    elif [ "$1" == "launch" ]; then
      aws cloudformation deploy \
        --stack-name $2 \
        --template-file $3 \
        --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM
    else
      aws cloudformation $@
    fi
  }; f

$ aws cfn launch teststack template.yml
...
$ aws cfn list
------------------------------------------------------------------------
|                              ListStacks                              |
+-------------+-------------------+------------------------------------+
| StackName   |    StackStatus    |            UpdateTime              |
+-------------+-------------------+------------------------------------+
|  teststack  |  UPDATE_COMPLETE  |  2020-04-08T18:49:08.717000+00:00  |
+-------------+-------------------+------------------------------------+
$ aws cfn resources teststack
...
$ aws cfn events teststack
...
$ aws cfn outputs teststack
...
$ aws cfn delete teststack
...
$ aws s3a make --help
Usage: mycli s3 make [OPTIONS] S3PATH [CONTENT]

Options:
  --acl TEXT           Access Control List: private | public-read
  --content-type TEXT  Content type of the file, like text/plain or text/html
  -h, --help           Show this message and exit.
...

Conclusion

We have learned how to extend the aws cli with some shortcuts, sane defaults, and our own mycli based on Python and Click.

I'm working on an aws cfn deploy command in Python. This command validates the template for errors (cfn-lint), triggers the deployment using change sets (with confirmation), and shows all events while the stack is created or updated. Tags and stack parameters are passed as key/value pairs in JSON objects.
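For those key/value JSON objects, a small conversion step is needed, because boto3's CloudFormation client expects lists of ParameterKey/ParameterValue and Key/Value dictionaries. A sketch of that conversion (the function names are mine, not part of any published CLI):

```python
def to_parameters(obj):
  # {"Env": "test"} -> [{"ParameterKey": "Env", "ParameterValue": "test"}]
  return [{'ParameterKey': k, 'ParameterValue': v} for k, v in obj.items()]

def to_tags(obj):
  # {"Team": "cloud"} -> [{"Key": "Team", "Value": "cloud"}]
  return [{'Key': k, 'Value': v} for k, v in obj.items()]
```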

I also have a command aws login cli <profile> that finds the specified profile, asks for my MFA token, assumes a role, and writes the temporary credentials to the profile specified by --profile, or to the default profile. This profile can easily be used by other applications without storing the temporary credentials in arbitrary locations or passing them as environment variables. It also works well with Docker containers.
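A minimal sketch of the credentials-writing half of that flow, assuming the behavior described above (the assume-role part needs live AWS access, and the function and parameter names are mine):

```python
import configparser
import os

def write_profile(creds, profile='default', path='~/.aws/credentials'):
  # Store temporary credentials under [profile] in the AWS credentials file.
  path = os.path.expanduser(path)
  config = configparser.ConfigParser()
  config.read(path)
  config[profile] = {
      'aws_access_key_id': creds['AccessKeyId'],
      'aws_secret_access_key': creds['SecretAccessKey'],
      'aws_session_token': creds['SessionToken'],
  }
  with open(path, 'w') as f:
    config.write(f)

def login(role_arn, mfa_serial, profile='default'):
  # Assume the role with an MFA token and persist the temporary credentials.
  import boto3  # imported here so write_profile stays usable without boto3
  token = input('MFA token: ')
  creds = boto3.client('sts').assume_role(
      RoleArn=role_arn,
      RoleSessionName='mycli-login',
      SerialNumber=mfa_serial,
      TokenCode=token,
  )['Credentials']
  write_profile(creds, profile)
```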

-Martijn

Update May 4, 2020

I just discovered some strange behaviour when trying to use an alias to log in. My idea was to build something like this: aws login cli <source> --profile <target>. It finds the role information in the source profile, assumes the role, and writes the credentials into the target profile specified by --profile, or by the AWS_DEFAULT_PROFILE environment variable if that is set. But the aws CLI validates these parameters first, which simply means --profile cannot be used this way, and if you specify a non-existing AWS_DEFAULT_PROFILE, that is also caught before your command executes.

Photo by Juan Gomez on Unsplash

Author(s)
Martijn van Dongen
AWS Tribe Lead