Pavel Gasanov

Django + Docker + AWS EB = <3 part 2

31 March 2019

A while ago I wrote a simple guide on running a django application in a docker container on AWS. I intentionally kept several quite important details out of the text to keep it simple. While they are not required to just run a django app, they are needed later on to actually serve content. Today we are going to fix that.

Private docker hub repository

You spend a lot of time developing your app, so you probably want to keep it away from the public. You can set up your own docker registry, use one of the many available online, or use a private repository on docker hub. Since docker hub limits free accounts to a single private repo and it’s pretty complex to set up your own registry, it’s better to use AWS’s own docker registry, called ECR.

As we are going to use AWS ECR from the terminal, you’ll need the AWS CLI. Install it with pip3 install awscli --upgrade --user as per the AWS docs.

Run aws configure to set up credentials and the default region. Use the same details as for the user we created earlier for elastic beanstalk.

Run aws ecr create-repository --repository-name myproject to create a repository. In response you will get a JSON document containing the details of your repository. Save repositoryUri - this is your new repo address.

Log your docker client in to AWS ECR with $(aws ecr get-login --no-include-email --region us-east-2). This fetches and executes a docker login command with auto-generated credentials.

Add a tag with the new repository uri to your image with docker tag myproject:latest YOUR_REPOSITORY_URI:latest. This links your local image with the repository_uri image.

Push your docker image with docker push YOUR_REPOSITORY_URI:latest. If everything is good, you will see it uploading the same way as with docker hub. At the end, you can check that it was uploaded with aws ecr describe-images --repository-name myproject - the response will contain the image details.
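Put together, the whole push flow looks like this (replace myproject and the region with your own values; YOUR_REPOSITORY_URI is the repositoryUri returned by create-repository):

```shell
# create the repository and note the repositoryUri in the response
aws ecr create-repository --repository-name myproject

# log docker in to ECR with auto-generated credentials
$(aws ecr get-login --no-include-email --region us-east-2)

# tag the local image with the repository uri and push it
docker tag myproject:latest YOUR_REPOSITORY_URI:latest
docker push YOUR_REPOSITORY_URI:latest

# verify that the image landed in the registry
aws ecr describe-images --repository-name myproject
```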

All that’s left to do is update the following in Dockerrun.aws.json, so elastic beanstalk will know where to fetch the image:

"Image": {
    "Name": "YOUR_REPOSITORY_URI:latest",
    "Update": "true"
},

Now run make deploy to redeploy. Check your environment url - if everything works, you shouldn’t notice any changes, but now the image comes from a secure private repository that only you have access to.

Don’t forget to replace the repository uri in the makefile!

Static files

One of the important things that wasn’t touched on before is serving static (and media) files. These are your stylesheets, scripts, images and so on, which you will want to serve not from gunicorn/django, but directly from nginx to keep it fast.

Achieving this is quite complex. We will need to:

- configure django to collect static files into a separate directory
- share that directory between the app and nginx containers via a docker-compose volume
- configure nginx to serve files from it
- replicate the same setup on AWS

For the sake of keeping the tutorial simple let’s not create any static files ourselves - thankfully django provides the default admin app, which has multiple static files. We can check one of them to make sure that our static files are served correctly.

For example, we will check following css file: /static/admin/css/fonts.css

Django config

First, you need to configure django to store static files in a separate folder, as explained in the django docs. This way django will copy all static files across its apps and put them in that folder for nginx to serve independently.

Add the following to app/myproject/settings.py:

# ...
STATIC_URL = '/static/'
STATIC_ROOT = "/srv/static/"

You’ll also need to run a django command to actually copy those files every time the container starts. For that, add the following to the startup script in config/app/:

# Copy all static files to a directory shared with nginx
python manage.py collectstatic --noinput

Docker-compose shared volume

Even though django saves the files in a directory, they are still located in a different container, which means nginx won’t be able to access them. We could serve them via gunicorn, but that leads to bad performance and is generally not advised.

A better solution is to create a shared volume between the app and nginx containers. For that you’ll need to modify docker-compose.yml:

version: '3'

services:
  app:
    # ...
    volumes:
      - ./app:/app
      - static:/srv
  nginx:
    # ...
    volumes:
      - ./config/nginx:/etc/nginx/conf.d
      - static:/srv:ro

volumes:
  static:

After that, everything copied to the /srv directory in the app container will be visible to nginx. It won’t work vice versa, because the volume in nginx is mounted with ro (read-only) permissions.
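To convince yourself that the volume is shared, you can list the directory from both containers once the stack is up (service names app and nginx as in the compose file):

```shell
# the same files should be visible from both sides of the shared volume
docker-compose exec app ls /srv/static
docker-compose exec nginx ls /srv/static
```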

Nginx config

Now that our files are in the nginx container, we need to serve them. That means adding the following to config/nginx/app.conf:

# ...

server {
  # serve static files directly
  location /static/ {
    alias /srv/static/;
    autoindex off;
  }

  # ...
}

At this point you can run make run and check that the static files exist and are served. Go ahead and check - you should see the stylesheet for the default django admin panel.
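One way to check, assuming the stack is listening on localhost port 80, is to request the admin stylesheet mentioned earlier and look for a 200 response:

```shell
# a 200 here means nginx is serving the file straight from the shared volume
curl -I http://localhost/static/admin/css/fonts.css
```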

AWS config

Our AWS setup uses a single docker container to serve our app. That means when we deploy and build it, it actually does the following:

- fetches the image described in Dockerrun.aws.json from the registry
- starts the container
- puts an nginx proxy on the host in front of it

At this point we already have the docker image configured, so we need to mount the static files volume to the host and reconfigure the host nginx.

Mounting the volume is easy - all we need is to add directory paths for both the container and host directories in Dockerrun.aws.json:

"Volumes": [
    {
        "HostDirectory": "/srv",
        "ContainerDirectory": "/srv"
    }
],

Changing the nginx config is quite tricky. It is located directly in the environment, so we can only change it after the environment is set up. When deploying, we can provide a special directory called .ebextensions that contains config files with a specific yaml syntax.

Create .ebextensions/proxy.config:

files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
        map $http_upgrade $connection_upgrade {
            default        "upgrade";
            ""             "";
        }

        server {
            listen 80;

            gzip on;
            gzip_comp_level 4;
            gzip_types text/html text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
                set $year $1;
                set $month $2;
                set $day $3;
                set $hour $4;
            }
            access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;

            access_log    /var/log/nginx/access.log;

            location /static/ {
                alias /srv/static/;
                autoindex off;
            }

            location / {
                proxy_pass            http://docker;
                proxy_http_version    1.1;

                proxy_set_header    Connection         $connection_upgrade;
                proxy_set_header    Upgrade            $http_upgrade;
                proxy_set_header    Host               $host;
                proxy_set_header    X-Real-IP          $remote_addr;
                proxy_set_header    X-Forwarded-For    $proxy_add_x_forwarded_for;
            }
        }

  # note: this is the conventional post-deploy hook location; it may differ between platform versions
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_restart_nginx.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash -xe
      rm -f /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf
      service nginx stop
      service nginx start

The config we are going to use does 2 things:

- replaces the default elastic beanstalk nginx proxy config with our own
- adds a post-deploy hook that removes the old config and restarts nginx

Note that our nginx config is a little bit different from the one we use in docker-compose:

- requests are proxied to the http://docker upstream defined by elastic beanstalk, instead of the app container
- gzip compression is enabled
- access logs are also written in the healthd format, so environment health reporting keeps working

After that, run make deploy and check the same file on AWS.

Does it not work? Try connecting via eb ssh and check /etc/nginx/conf.d. Make sure that it contains /etc/nginx/conf.d/proxy.conf. Also, make sure that /srv contains your static files.
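A quick troubleshooting sketch for an eb ssh session (paths taken from the config above):

```shell
# the custom proxy config should be in place...
ls -l /etc/nginx/conf.d/proxy.conf

# ...the default docker proxy config should be gone...
ls /etc/nginx/sites-enabled/

# ...and the collected static files should be on the host
ls /srv/static
```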

Production vs Development settings

By default django creates apps configured for development - meaning it will show a full description of any exceptions or errors it encounters.

You can check for any problems with your settings with the command python manage.py check --deploy. You will see that there are a lot of security warnings - such as DEBUG being enabled and the SECRET_KEY being stored in source code.

There are multiple ways of doing this, but we are going to use the official one by utilizing the DJANGO_SETTINGS_MODULE env variable. This variable contains the python path to the settings file django uses. In production we will keep the default myproject.settings, but in development we will use a new one called myproject.settings_dev that imports the production settings and adds/changes some of them.

Create the file app/myproject/settings_dev.py:

# Development settings - DO NOT USE THIS IN PRODUCTION
from myproject.settings import *

# some constant secret key
SECRET_KEY = 'g!@a2x+)1y$w_zo7q(vei!q90-tk_$si97%zfx%r^5_sw1o%sg'

# Debug mode turned on
DEBUG = True

# Allowing all hosts
ALLOWED_HOSTS = ['*']

Now that we have development settings, we can tighten and fix some security flaws in the production settings. Modify app/myproject/settings.py:

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY')

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = False

# By default all hosts are forbidden - you should change it to your environment URI.
ALLOWED_HOSTS = []

We are going to use an env variable to set SECRET_KEY - this way it doesn’t leak into the docker image, so only the AWS EB admin can access it. In this example ALLOWED_HOSTS is set to an empty list, but in your case it should contain your EB environment url.

In order to enable development mode, set the env variable in your docker-compose.yml file:

services:
  app:
    # ...
    environment:
      - DJANGO_SETTINGS_MODULE=myproject.settings_dev

If you push your code to AWS, you’ll notice that it doesn’t work - that’s because it doesn’t have a SECRET_KEY. To generate a random one yourself, you can use django’s own secret key generator: docker-compose exec app python -c "exec(\"from django.core.management.utils import get_random_secret_key\nprint(get_random_secret_key())\")"

To set it in EB, call the following command: eb setenv DJANGO_SECRET_KEY="YOUR_SECRET_KEY". Don’t forget the quotes! You can check that it’s set by running eb printenv.
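If the container isn’t running, a plain python3 one-liner with only the standard library works too. This is a sketch, not django’s own generator - the character set below mirrors the one django uses:

```shell
# generate a 50-character secret key using only the python standard library
python3 -c 'import secrets; chars = "abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)"; print("".join(secrets.choice(chars) for _ in range(50)))'
```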

Now redeploy your app with make deploy. If you are getting Not Found - congratulations! That means it actually works, since we don’t have any pages to show and debug is turned off. If you get 400 Bad Request - you probably forgot to change ALLOWED_HOSTS. Static files should still work either way - nginx serves them directly, without going through django.

Setting database

Our local version has had postgres from the beginning - but the cloud version does not. If we are going to use it, we should set it up. As we are already using everything in AWS, we can go for AWS RDS - Amazon Relational Database Service - to provide us with postgres for our needs.

We could have created it when creating the environment by specifying eb create --database --database.engine postgres, which handles database creation and adds the required policies, but that would require setting up a new environment. For an already existing environment, it seems there is no single cli command that does everything at once.

However, we can do it manually in the AWS console. Open your environment, go to Configuration, Database. Change the following and press Apply:

- Engine: postgres
- Username and password of your choice

Creating the database will take some time. It will create an RDS postgres database and set up the required policies to access it. The default database name is postgres. After a while and a bunch of event logs, return to the configuration menu and look for the database endpoint - that’s your database uri.

You need to specify the credentials for the django app. Go to app/myproject/settings.py and modify the database settings. We will take them from env variables:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DJANGO_DB_NAME'),
        'USER': os.environ.get('DJANGO_DB_USER'),
        'PASSWORD': os.environ.get('DJANGO_DB_PASS'),
        'HOST': os.environ.get('DJANGO_DB_HOST'),
        'PORT': os.getenv('DJANGO_DB_PORT', 5432),
    }
}

Now you can redeploy everything to AWS with make deploy. You won’t notice any changes yet - for that you have to:

- set the DJANGO_DB_* env variables in the environment with eb setenv, using the RDS endpoint and the credentials you chose
- restart the app so it picks them up
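Setting the database env variables is the same eb setenv call as before - a sketch, with placeholder values for the endpoint and credentials you got from the console:

```shell
# all four variables can be set in one call; eb setenv triggers an environment update
eb setenv \
  DJANGO_DB_NAME="postgres" \
  DJANGO_DB_USER="YOUR_DB_USER" \
  DJANGO_DB_PASS="YOUR_DB_PASSWORD" \
  DJANGO_DB_HOST="YOUR_RDS_ENDPOINT"
```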

This will retrigger the migrations, and if they run without errors it means our database got populated with the default django structure. You can go ahead, create a superuser and try to access the django admin!
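Locally, creating the superuser is one command through docker-compose (service name app assumed, as in the compose file):

```shell
# creates an admin account interactively; log in at /admin/ afterwards
docker-compose exec app python manage.py createsuperuser
```

On AWS you can eb ssh into the instance and run the same manage.py command with docker exec inside the single running container.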


Deploying apps on AWS is a quite complex task - there are a lot of things that must be accounted for, like setting proper configurations and permissions. Hopefully this guide helped you troubleshoot some common problems or gave you some ideas. If you encounter any problems, feel free to comment here or on github.