Pavel Gasanov

Django + Docker + AWS EB = <3

21 July 2018

This tutorial describes the process of creating and deploying a Django application on Amazon Web Services using docker containers. It is designed for new developers who don’t have previous experience with all or any of those services.

Why exactly this stack?

Django is a popular python web framework and has been my favorite for a long time. It’s simple yet powerful, ships with a lot of features, has a strong community and is great for building complex database-driven websites.

Docker lets you deploy an application on the hosting service exactly as it was developed on your work machine, meaning you have no problems with missing dependencies, conflicts between software and so on.

AWS provides a lot of amazing features, such as automatic scalability, load balancing and capacity provisioning. It is a great platform for your application, but it is not that easy for a newcomer to understand how it works.

Argh! Another guide!

There are already some instructions on how to create and deploy a django app in a docker container:

  1. Official docker guide on running a django app in a container.
    While great for beginners, it’s not meant to be used in a production environment. It uses sqlite and django’s built-in webserver, the code is not packed into the image, and so on.

  2. Introduction to Django on AWS Elastic Beanstalk with Docker by Glyn Jackson
    This awesome article brings good insight into how AWS works internally, but it is a little bit outdated and uses old file structures. And, for some reason, Glyn’s container didn’t work for me out of the box (it had ownership issues with the code directory).

Prerequisites

This guide was done inside VirtualBox on a freshly installed Ubuntu 18.04 with minimal installation, so you should not encounter any unexpected problems. However, if something goes wrong and Google/StackOverflow can’t help you, don’t hesitate to contact me.

We are going to use:

  • Python 3 and Django
  • gunicorn
  • Nginx
  • PostgreSQL
  • Docker and docker-compose
  • AWS Elastic Beanstalk

It would be great if you at least know that those are not pokemons. Anyhow, knowing them is not required, as long as you keep pressing the same buttons I do!

You can get all the code from GitHub.

How it’s going to be

We are going to:

  1. Create a new docker image with the django app on top of the official python 3 image. This image is going to include django and the gunicorn webserver.

  2. Create a stack of the following docker images using docker-compose:
    • Django + gunicorn
    • Nginx
    • Postgresql

    At this point you should be able to run the app on your work machine.

  3. Prepare and deploy on AWS Elastic Beanstalk.
    At this point anyone can access your app from the internet.

  4. We’ll even create a simple makefile to automate running the app on the work machine and deploying it on AWS!

Django

Start small by creating a project folder with the following structure:

myproject/
├── app/
└── config/
    ├── app/
    └── nginx/

app is going to hold the django application, config is going to hold configuration files for everything.

Assuming your work machine already has python3, you need to create a new virtual environment and install the required packages.

Create the file config/app/requirements.txt. Note that we specify exact versions of the packages. Packages might change behaviour between versions, leading to unexpected conflicts. Since I want the image to work in the future exactly as it does right now, I pin strict package versions.

When you get comfortable with the whole stack, you should update the versions to the latest ones and keep them in check, since newer versions fix bugs and vulnerabilities.

Django==2.0.7
gunicorn==19.6.0
psycopg2==2.7.5

I used to create the django app from inside the container itself, but after a while I realised that it is not really that useful - I was still using python and virtualenv on the work machine for debugging, linting, code completion and the CLI.

So, run the following commands in a terminal:

# Install virtualenv and python pip
sudo apt-get install virtualenv python3-pip

# Upgrade virtualenv (Ubuntu's python3-pip package provides pip3, not pip)
pip3 install --upgrade virtualenv

# Create virtualenv at myproject/.venv with python3
virtualenv -p python3 .venv

# Activate virtualenv
source .venv/bin/activate

# Install dependencies in our virtualenv
pip install -r config/app/requirements.txt

# Create default django application at myproject/app
django-admin.py startproject myproject app

Now that your default django app is created, the last thing to do before putting it inside docker is changing the default settings. We are going to use postgres, so we need to change the app settings in app/myproject/settings.py

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres',
        'HOST': 'db',
        'PORT': 5432,
    }
}

Additionally, django uses the ALLOWED_HOSTS setting, which by default blocks any non-localhost connections. We need to add 2 hostnames: one for the local environment (which docker-compose is going to use) and one for the staging environment (which AWS will provide, so keep in mind that it will change).

ALLOWED_HOSTS = ['app']
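
For now only the local hostname is known. Once Elastic Beanstalk assigns a URL to your environment (we get to that in the AWS EB section below), the setting would look roughly like this - the second entry is just the example hostname used later in this guide, substitute whatever EB gives you:

ALLOWED_HOSTS = [
    'app',  # local hostname, the docker-compose service that nginx proxies to
    'myproject-staging.us-east-2.elasticbeanstalk.com',  # replace with the hostname EB assigns
]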

Docker image

Go to the Docker installation guide and grab the one for your machine. Details might change in the future, so it’s better to do this from an up-to-date source.

Currently for Ubuntu 18.04 it’s:

# Install docker and docker-compose (the Ubuntu package for Docker is docker.io)
sudo apt-get install docker.io docker-compose

# Add current user to the docker group (so we can run docker without sudo)
sudo usermod -aG docker ${USER}

# Reboot
sudo reboot

Next, create the file config/app/Dockerfile and put the following content in it:

# Creating image based on official python3 image
FROM python:3

# Your contacts, so people know whom to blame afterwards
MAINTAINER Pavel Gasanov <pogasanov@gmail.com>

# Sets dumping log messages directly to stream instead of buffering
ENV PYTHONUNBUFFERED 1

# Creating and putting configurations
RUN mkdir /config
ADD config/app /config/

# Installing all python dependencies
RUN pip install -r /config/requirements.txt

# Open port 8000 to outside world
EXPOSE 8000

# When container starts, this script will be executed.
# Note that it is NOT executed during building
CMD ["sh", "/config/on-container-start.sh"]

# Creating and putting application inside container
# and setting it to working directory (meaning it is going to be default)
RUN mkdir /app
WORKDIR /app
ADD app /app/

When the container starts, it should create migrations, apply them to the database and then start the gunicorn server.

Create the file config/app/on-container-start.sh

# Create migrations based on django models
python manage.py makemigrations

# Migrate created migrations to database
python manage.py migrate

# Start gunicorn server at port 8000 and keep an eye for app code changes
# If changes occur, kill worker and start a new one
gunicorn --reload myproject.wsgi:application -b 0.0.0.0:8000

Create the nginx configuration at config/nginx/app.conf

# define group app
upstream app {
  # balancing by ip
  ip_hash;

  # define server app
  server app:8000;
}

# portal
server {
  # all requests proxies to app
  location / {
        proxy_pass http://app/;
    }

  # only respond to port 8000
  listen 8000;

  # domain localhost
  server_name localhost;
}

Docker compose

Now you can build the docker image, but for the work machine it’s far more useful to use docker-compose.

Docker-compose allows you to define and connect multiple containers. It is driven by the file docker-compose.yml.

Our stack is going to have 3 services:

  • db - a database based on the official postgres image
  • app - our django application served by gunicorn
  • nginx - a web server based on the official nginx image

For each service we specify how its image is built, what command runs when the container starts, what its hostname is, and which ports it exposes or publishes.

Create the file docker-compose.yml in the project root with the following content:

# File structure version
version: '3'

services:
  # Database based on official postgres image
  db:
    image: postgres
    hostname: db

  # Our django application
  # Build from remote dockerfile
  # Connect local app folder with image folder, so changes will be pushed to image instantly
  # Open port 8000
  app:
    build:
      context: .
      dockerfile: config/app/Dockerfile
    hostname: app
    volumes:
      - ./app:/app
    expose:
      - "8000"
    depends_on:
      - db

  # Web server based on official nginx image
  # Connect external 8000 (which you can access from browser)
  # with internal port 8000(which will be linked to app port 8000 in configs)
  # Connect local nginx configuration with image configuration
  nginx:
    image: nginx
    hostname: nginx
    ports:
      - "8000:8000"
    volumes:
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - app

Now you are ready to start your django app! Just type docker-compose up!
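
For reference, from the project root (the -d flag is optional and runs the stack in the background):

# Build the image if needed and start the whole stack
docker-compose up

# Or run it detached and follow the logs separately
docker-compose up -d
docker-compose logs -f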

You will notice that it pulls the postgresql and nginx images from docker hub, builds your new image and connects them all together. This will take some time on the first launch, but after that it will be quite fast - docker keeps the images.

When you see that the server has started, open your browser and go to http://127.0.0.1:8000.

Default django welcome page

Docker Hub

The next step is to prepare your docker image and push it to Docker Hub. This requires a docker hub account, so go on and register one on the docker hub website.

Run docker build -t pogasanov/myproject -f config/app/Dockerfile . (the trailing dot is the build context). This will build a new image just like docker-compose does, except it tags it with a custom name. This name should be <Your Docker Hub username>/<Your Image name>.

Since we are going to push our image to docker hub, we have to log in with our credentials using docker login.

You can upload the image to docker hub using docker push pogasanov/myproject
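
Put together, the whole sequence looks like this (substitute your own Docker Hub username and image name for pogasanov/myproject):

# Build the image and tag it as <username>/<image name>
docker build -t pogasanov/myproject -f config/app/Dockerfile .

# Log in to Docker Hub with your credentials
docker login

# Upload the tagged image to Docker Hub
docker push pogasanov/myproject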

Congratulations! Now your django app is on docker hub! Onwards to the last step!

AWS EB

The last part is to deploy our code onto the web.

First of all, you need to register for AWS. Amazon has a free tier of services, which allows you to deploy 1 environment with 1 container free of charge for 1 year.

You need to install the AWS EB command line interface. Run pip install awsebcli
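
For example, inside the virtualenv created earlier (the eb --version call is just a quick check that the CLI ended up on your PATH):

# Install the Elastic Beanstalk CLI
pip install awsebcli

# Verify the installation
eb --version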

AWS uses a special file, Dockerrun.aws.json, which remotely resembles docker-compose.yml but uses settings specific to AWS. In it we specify the version of the file, the docker image and the exposed port.

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "pogasanov/myproject:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8000"
    }
  ]
}

You need to create a user with the required permissions for ElasticBeanstalk. AWS uses a service called IAM (Identity and Access Management) to manage users, groups and their permissions.

Go to IAM. Create a new group and attach the permissions required for Elastic Beanstalk.

Then create a new user and put it into this group.

Get security credentials for this user. You’re going to need the access key id and the secret access key.
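
eb init will prompt for these keys if it can’t find any credentials; alternatively, a common place to keep them is the ~/.aws/credentials file (the values below are placeholders):

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY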

Now you are ready to work with AWS EB. Type in eb init and answer its questions.

This will create .elasticbeanstalk/config.yml with your project configuration.

Now you can create an environment on AWS with eb create. As before, answer its questions.
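
One possible invocation, just as a sketch - myproject-staging is simply the environment name used later in this guide, pick anything you like:

# Create and deploy a new Elastic Beanstalk environment
eb create myproject-staging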

AWS will create the new environment and check your project root for one of two files - Dockerfile or Dockerrun.aws.json. Since our Dockerfile is in a separate directory, it will go with Dockerrun.aws.json.

We go with Dockerrun.aws.json because otherwise AWS would build the docker image by itself and run that instead. This would slow down deployment significantly (especially if you have a complex image with multiple dependencies) and would somewhat ruin our whole plan of having the same environment both on the work machine and on the web.

When AWS receives the upload, it will find out that it depends on a docker hub image, then download and run it. After that, it will connect its internal nginx server to the specified container port.

You can find the URL of your new web environment on the application page. For me it’s myproject-staging.us-east-2.elasticbeanstalk.com

If you update your docker image and want to push the change to AWS, simply type eb deploy. It will reupload the project and trigger an update.

Makefile

It’s probably a good idea to automate the boring parts. Write a simple makefile in the project root.

.PHONY: all build push deploy run stop

all: build push deploy

build:
	docker build -t pogasanov/myproject:latest -f config/app/Dockerfile .

push:
	docker push pogasanov/myproject:latest

deploy:
	eb deploy

run:
	docker-compose up -d

stop:
	docker-compose down

What’s next

This guide is already big enough, but there is still work to be done.

The next part is available here.