Exploration into ECS (Elastic Container Service) -- Part I - Night2

Per the tutorial instructions you should be able to just run the Python setup script: "python setup.py -m setup -r <your_region>". If you don't know your region's code, look it up in the table on the AWS "Regions and Availability Zones" page. On my Mac I have multiple versions of Python installed, so I actually use python3 instead of python. So I tried

python3 setup.py -m setup -r us-east-1

And unfortunately it failed. I received the error

Traceback (most recent call last):
  File "setup.py", line 11, in <module>
    import boto3
ModuleNotFoundError: No module named 'boto3'

Regarding line numbers throughout this post: they may be off by a bit, given either changes to the original script made by the authors or shifts caused by my own revisions. You may need to perform string searches to find the exact lines. Note: I have also included a GitHub link at the bottom of the post to my modified script for you to download.

It's pretty clear that a Python module is missing. Interestingly, Boto3 is the AWS SDK for Python. To install Boto3 I simply ran

pip3 install boto3

Remember, I have multiple versions of Python on my system, so you may just need to use pip instead of the pip3 command.

After I successfully installed the boto3 module, I attempted the setup again. This time I received an error from botocore:

botocore.exceptions.ClientError: An error occurred (InvalidClientTokenId) when calling the CreateRole operation: The security token included in the request is invalid.

If you look at the documentation for Boto3, you will see that you need proper AWS credentials set up on your local machine for the AWS Python SDK to be able to communicate with services within your AWS account. And yes, if you haven't set up an Amazon account yet, you will need to sign up for at least a free tier account.

Disclaimer: there will be costs incurred if you leave these CloudFormation stacks up and running. The stack uses RDS and relatively large containers. You will probably want to spin up the stack, do some testing, and tear the stack down, or be prepared to pay some money to Amazon. The EC2 instance types defined in the CloudFormation template are c4.xlarge and m4.large across the different modules. You probably want to switch these to t2.micro while you are playing with the stack to try and minimize AWS expenses; you can always increase the sizes later. In the version of the script I posted to GitHub I have changed the EC2 instances to t2.micro. If you attempt this on your own you will also want to change the memory allocation defined for the containers in the Python setup script. It is set at 1024, and since a t2.micro maxes out at 1 GB of memory, your stack will never spin up appropriately; I was seeing a 503 error from the load balancer whenever I tried to spin up the stack. I changed the allocation to 512.
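The arithmetic behind the 512 value is simple. Here is a minimal sketch of the resizing; the variable names are hypothetical, not the actual parameters used in setup.py:

```python
# Hypothetical names for illustration; setup.py defines these differently.
# A t2.micro provides 1024 MiB of RAM. Reserving all 1024 MiB for the
# container leaves no headroom for the ECS agent and OS, so tasks never
# place and the load balancer answers 503. Halving the reservation leaves room.
INSTANCE_TYPE = "t2.micro"       # originally c4.xlarge / m4.large
T2_MICRO_RAM_MIB = 1024
CONTAINER_MEMORY_MIB = 512       # originally 1024

headroom = T2_MICRO_RAM_MIB - CONTAINER_MEMORY_MIB
print(INSTANCE_TYPE, CONTAINER_MEMORY_MIB, headroom)  # → t2.micro 512 512
```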

I wanted to avoid using my root administrator account, so I logged into the AWS console. After logging in, you should see a Services dropdown in the top menu. The IAM (Identity and Access Management) service link should be available in the far left panel if you have accessed it recently, or you can search for IAM.

Now proceed to create a new user that will be used to create the CloudFormation stack. This user will be used by the Python script and the AWS Python SDK that define the stack and all of its services. Click the Users link to get to the Users page. You should see an "Add User" button at the top, which brings you through a multi-step wizard to create the user. I created a "dev_user", created a custom DevGroup, and assigned the following policies to the group:

  • PowerUserAccess
  • IAMFullAccess
  • AmazonEC2ContainerRegistryFullAccess

When the user is created you will see the key items you need for configuration on your local system: the Access Key ID and the Secret Access Key. Please note the warning that AWS provides: this will be the only time you are able to see or download the secret access key, so make sure you have a means to get it into your config.

If you read through the Boto3 configuration documentation you will see there are multiple options for credential configuration. I chose to use the shared credentials file, which lives in the .aws directory under your user's home directory:

vi ~/.aws/credentials

Then add the Access Key ID and Secret Access Key you downloaded or noted during user creation in the AWS console under the [default] profile within the credentials file.
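For reference, the shared credentials file is a small INI-style file. The keys below are AWS's documented example placeholders, not real credentials:

```ini
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```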


Now I was ready to attempt to run the setup script once again.

   python3 setup.py -m setup -r us-east-1

This execution cycle I received the following error:

Traceback (most recent call last):
  File "setup.py", line 858, in <module>
  File "setup.py", line 848, in main
    setup_results = setup(project_name=project_name, service_list=service_list, region=region)
  File "setup.py", line 532, in setup
  File "setup.py", line 497, in docker_login_config
    f.write(json.dumps(data, indent=2))
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/__init__.py", line 238, in dumps
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/encoder.py", line 201, in encode
    chunks = list(chunks)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/encoder.py", line 430, in _iterencode
    yield from _iterencode_dict(o, _current_indent_level)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/encoder.py", line 404, in _iterencode_dict
    yield from chunks
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/encoder.py", line 376, in _iterencode_dict
    raise TypeError("key " + repr(key) + " is not a string")
TypeError: key b'989378957521.dkr.ecr.us-east-1.amazonaws.com' is not a string

There are obviously some things wrong with the script. I have a feeling it has to do with recent revisions of Python or Boto3. I didn't spend time trying to figure out which revision caused the issue; I focused on finding the changes I could make to get the script working.

It is clear from the last line that the script is encountering a bytestring, as indicated by the b' prefix on the hostname, where a plain string is expected. At line 497 of the script I changed the lines

        hostname: {
            "auth": ecr_login_token

to

        str(hostname, 'utf-8'): {
            "auth": str(ecr_login_token, 'utf-8')
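The failure is easy to reproduce in isolation: in Python 3, json.dumps only accepts str dictionary keys, and both the registry hostname and the login token come back as bytes here. A minimal sketch, with placeholder hostname and token values:

```python
import json

# Python 3's json encoder rejects bytes dictionary keys.
data = {b"registry.example.com": {"auth": b"token"}}
try:
    json.dumps(data, indent=2)
except TypeError:
    print("bytes keys are rejected")

# Decoding the key and the token first makes the dict serializable,
# which is exactly what the str(..., 'utf-8') change does.
decoded = {k.decode("utf-8"): {"auth": v["auth"].decode("utf-8")}
           for k, v in data.items()}
print(json.dumps(decoded, indent=2))
```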

Ok, time to give the setup script another whirl. Progress: I now saw the login to AWS succeed and the Docker configuration being created. But it then failed at the point of interacting with AWS ECR (the Elastic Container Registry).

INFO:__main__:Create ECR repository
Traceback (most recent call last):
  File "setup.py", line 855, in <module>
  File "setup.py", line 845, in main
    setup_results = setup(project_name=project_name, service_list=service_list, region=region)
  File "setup.py", line 546, in setup
    os.environ["docker_registry_host"] = uri.split('/')[0]
TypeError: a bytes-like object is required, not 'str'

Well, a similar issue, but in reverse this time. Here I made a change at line 543, from

 uri = create_repository_response['repository']['repositoryUri'].encode('utf-8')

to

 uri = create_repository_response['repository']['repositoryUri']
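This one can also be reproduced in a couple of lines; the account id and URI below are placeholders. bytes.split() requires a bytes separator, so the encoded URI blows up when split with a str:

```python
# Placeholder URI; real ones look like <account>.dkr.ecr.<region>.amazonaws.com/<repo>.
uri_str = "123456789012.dkr.ecr.us-east-1.amazonaws.com/spring-petclinic-rest"

try:
    uri_str.encode("utf-8").split('/')   # bytes.split() with a str separator
except TypeError as e:
    print(e)                             # a bytes-like object is required, not 'str'

# Leaving the URI as str (boto3 already returns str in Python 3) works fine.
print(uri_str.split('/')[0])
```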

Is the third time a charm?

Unfortunately not. It looks like things were left in a bit of an incomplete state.

INFO:botocore.vendored.requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): ecr.us-east-1.amazonaws.com
Traceback (most recent call last):
  File "setup.py", line 855, in <module>
  File "setup.py", line 845, in main
    setup_results = setup(project_name=project_name, service_list=service_list, region=region)
  File "setup.py", line 539, in setup
    create_repository_response = ecr_client.create_repository(repositoryName=service)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/botocore/client.py", line 314, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/botocore/client.py", line 612, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.RepositoryAlreadyExistsException: An error occurred (RepositoryAlreadyExistsException) when calling the CreateRepository operation: The repository with name 'spring-petclinic-rest' already exists in the registry with id '989378957521'

It appears there is already an item in the Elastic Container Registry (ECR) for the spring-petclinic-rest project. The previous runs of the setup.py script had probably created some artifacts and left things in an incomplete state. I went and checked the ECR service through the AWS console and, sure enough, there was an existing entry for the spring-petclinic-rest container. So I decided to see how well the script's cleanup option would work, even from an incomplete state.

python3 setup.py -m cleanup -r <your region>

I didn't see anything in the log output to indicate that the registry was cleaned up, but when I went back to the AWS console and refreshed, I saw that there were no longer any registries or containers defined.

Please note that deleting a CloudFormation stack can take quite a while, 15-20 minutes. The script continuously checks the "DELETE_IN_PROGRESS" state and should eventually complete.

So now I went back to my terminal and ran the setup script once again. I ran into a few more problems with code expecting strings and getting bytestrings, or vice versa. Here are the additional lines that needed to be modified. Line 605 changed from

elb_arn = create_elb_response['LoadBalancers'][0]['LoadBalancerArn'].encode('utf-8')

to

elb_arn = create_elb_response['LoadBalancers'][0]['LoadBalancerArn']

And Line 623 changed from

target_group_arn = create_target_group_response['TargetGroups'][0]['TargetGroupArn'].encode('utf-8')

to

target_group_arn = create_target_group_response['TargetGroups'][0]['TargetGroupArn']

And Line 639 changed from

listener_arn = create_listener_response['Listeners'][0]['ListenerArn'].encode('utf-8')

to

listener_arn = create_listener_response['Listeners'][0]['ListenerArn']

And Line 663 changed from

target_group_arn = create_target_group_response['TargetGroups'][0]['TargetGroupArn'].encode('utf-8')

to

target_group_arn = create_target_group_response['TargetGroups'][0]['TargetGroupArn']

And Line 585 changed from

dns_name = stack_create_status['Stacks'][0]['Outputs'][0]['OutputValue'].encode('utf-8')

to

dns_name = stack_create_status['Stacks'][0]['Outputs'][0]['OutputValue']

And Line 857 changed from

logger.info("Setup is complete your endpoint is http://"+ setup_results)

to

logger.info("Setup is complete your endpoint is http://"+ str(setup_results, 'utf-8'))

There was also this issue:

INFO:__main__:Create resources for service: spring-petclinic-rest
Traceback (most recent call last):
  File "setup.py", line 860, in <module>
  File "setup.py", line 850, in main
    setup_results = setup(project_name=project_name, service_list=service_list, region=region)
  File "setup.py", line 647, in setup
    Name=project_name + str(service_list.keys().index(service)) + '-tg',
AttributeError: 'dict_keys' object has no attribute 'index'

This is an issue with the move from Python 2 to Python 3. You might want to read this post regarding the change: https://blog.labix.org/2008/06/27/watch-out-for-listdictkeys-in-python-3

Line 648 changed from

Name=project_name + str(service_list.keys().index(service)) + '-tg',

to

Name=project_name + str(list(service_list.copy().keys()).index(service)) + '-tg',
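The underlying Python 2 to 3 change can be seen in isolation (the second service name below is made up for illustration):

```python
# In Python 2, dict.keys() returned a plain list, so .index() worked directly.
# In Python 3 it returns a dict_keys view object with no .index() method;
# converting the view to a list restores positional lookup.
service_list = {"spring-petclinic-rest": {}, "spring-petclinic-web": {}}

try:
    service_list.keys().index("spring-petclinic-web")
except AttributeError as e:
    print(e)  # 'dict_keys' object has no attribute 'index'

print(list(service_list.keys()).index("spring-petclinic-web"))  # → 1
```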

This issue was also present at Line 687

'image': repository_uri[service_list.keys().index(service)][service] + ':latest',

changed to

'image': str(repository_uri[list(service_list.copy().keys()).index(service)][service]) + ':latest',

Eventually I got to a point where the script completed successfully. Similar to the delete stack process, you may see the create stack step (status: CREATE_IN_PROGRESS) take a prolonged time to complete.

After the setup finally completed, I could open my preferred browser and hit the various endpoints: the endpoint indicated in the log output (the AWS ELB URL) plus one of the paths below:

  • /
  • /pet
  • /vet
  • /owner
  • /visit

All worked appropriately and provided a result. The / path brings you to a 'Welcome to PetClinic' page; the others return a JSON response.

Source of modified script (hosted on GitHub):