
Use Docker with Django, Celery, RabbitMQ, PostgreSQL and Nginx: the simplest explanation 🐋

By Tejas Mandre in software

Good day everyone. We are going to see how to containerize a full-stack Django application that integrates with RabbitMQ (via the Celery interface) and PostgreSQL. If you don't know Docker yet, check my short, sweet and simple introductory article here. Other than that, basics of Django and Python are a prerequisite for this tutorial. You can learn basic Celery and RabbitMQ from this article itself.

Each of the components mentioned here will run inside its own container. In total we'll have six containers running and communicating with each other over Docker's internal network and exposed ports. Here is a list of the containers we will be building:

  1. Django served using Gunicorn (an HTTP server written in Python)
  2. Nginx reverse proxy pointing to the Django-Gunicorn container
  3. PostgreSQL database container
  4. RabbitMQ server container
  5. Celery worker container
  6. Celery beat container for scheduling tasks

In the first half of the tutorial we will build a simple database-connected Django app that periodically adds numbers and stores the result in the database. In the second half, we'll dockerize the entire application.

Part 1

Create a simple Django app using the commands below (make sure to use a virtualenv):

pip install django celery django-celery-beat django-celery-results gunicorn psycopg2-binary gevent
django-admin startproject core
cd core
python manage.py startapp my_app

If you are on a Windows-based OS, you may need to install the psycopg2 package as well.

With this you should have a basic Django app set up, with Celery installed. Next, go to Docker Hub and pull the Docker images for PostgreSQL and RabbitMQ by typing the following commands in your terminal.

docker pull rabbitmq:3.9-alpine
docker pull postgres:13.6-alpine
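
To confirm the pulls worked, you can list your local images and check that both entries show up:

docker images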

If these commands fail you may not have Docker installed yet. Just google how to install Docker for your operating system, install it, and then come back.

Next go to settings.py in the core folder of your Django project and add

"django_celery_results",
"django_celery_beat",
"my_app",

to the INSTALLED_APPS list.
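
The result should look roughly like this (the default Django apps are elided here; leave yours as they are). One optional extra, since django-celery-results is installed: point Celery's result backend at the database so task results are persisted.

INSTALLED_APPS = [
    "django.contrib.admin",
    # ... the other default Django apps ...
    "django_celery_results",
    "django_celery_beat",
    "my_app",
]

# Optional: store task results in the DB via django-celery-results
CELERY_RESULT_BACKEND = "django-db"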

Run the following in the terminal:

python manage.py makemigrations && python manage.py migrate

In the models.py file of my_app, create a new model with a single field. We will create entries in this model to check DB connectivity from Celery.

from django.db import models

class AdditionResult(models.Model):
    # Stores the running total produced by the periodic Celery task
    answer = models.IntegerField(default=0)

Now create a new file named tasks.py in the my_app folder and paste the following code.

from celery import shared_task

from .models import AdditionResult

# Module-level counter; note that each worker process keeps its own copy
result = 1

@shared_task
def add_numbers():
    global result
    print("Running add numbers periodic task")
    result += result  # doubles the value on every run
    AdditionResult.objects.create(answer=result)

Now create a new file named celery.py in the core folder and paste the following code inside it.

import os

from celery import Celery
from celery.schedules import crontab

# Make sure Django settings are loaded before the app is configured
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "core.settings")

# "rabbitmq" is the broker's service name from docker-compose.yml
app = Celery("core", broker="amqp://guest@rabbitmq//")

# Read any CELERY_* settings from Django's settings.py
app.config_from_object("django.conf:settings", namespace="CELERY")

# Celery Beat settings: run the addition task every minute
app.conf.beat_schedule = {
    "periodic_add_numbers": {
        "task": "my_app.tasks.add_numbers",
        "schedule": crontab(minute="*/1"),
    },
}

app.autodiscover_tasks()

@app.task(bind=True)
def debug_task(self):
    print(f"Request: {self.request!r}")
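
For @shared_task to bind to this app, the standard Django + Celery setup also imports it when Django starts. Add these two lines to the __init__.py inside the core folder:

from .celery import app as celery_app

__all__ = ("celery_app",)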

With this, the base Django app is ready. Next we'll write the Dockerfile and docker-compose.yml and run everything as microservices.

Part 2

Create a new file called Dockerfile, without any extension, at the root level (the same level as manage.py). Paste the following content into it.

FROM python:3.8-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt

This will basically:

  1. Pull a Python slim image from Docker Hub
  2. Set the env variables that stop Python from writing __pycache__ files and from buffering stdout
  3. Copy all app files into the container and set /app as the working directory
  4. Install the requirements inside the container (see the note below)
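
A note on step 4: the build assumes a requirements.txt exists at the root level, which we haven't created yet. Generate one from the virtualenv with:

pip freeze > requirements.txt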

Create a new file at the same level as manage.py. Call it docker-compose.yml and paste the following content.

version: "3"
services:
  database:
    container_name: database
    restart: always
    image: postgres:13.6-alpine
    environment:
      - POSTGRES_DB=admin
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=password
    volumes:
      - db_data:/var/lib/postgresql/data

  backend:
    container_name: backend
    build: .
    volumes:
      - ./:/app
    depends_on:
      - database
    command: gunicorn -b 0.0.0.0:8000 --worker-class=gevent --worker-connections=1000 --workers=2 core.wsgi

  nginxrp:
    container_name: nginxrp
    restart: always
    build: ./nginx-server
    ports:
      - 8000:80
    depends_on:
      - backend
    volumes:
      - ./staticfiles:/staticfiles

  rabbitmq:
    container_name: rabbitmq
    restart: always
    image: rabbitmq:3.9-alpine
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq

  celeryworker:
    container_name: celeryworker
    build: .
    volumes:
      - ./:/app
    command: celery -A core worker --pool=prefork -l info
    depends_on:
      - rabbitmq

  celeryscheduler:
    container_name: celeryscheduler
    build: .
    volumes:
      - ./:/app
    command: celery -A core beat -l info
    depends_on:
      - celeryworker

volumes:
  db_data:
  rabbitmq_data:

This basically sets up everything needed. To understand this file in detail (explaining it fully would make this post very long), watch my YouTube video here.

Next, create a folder called nginx-server at the root level and create two new files inside it, named Dockerfile and nginx.conf.
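
At this point the project layout should look roughly like this (only the files relevant to this tutorial are shown):

core/
├── manage.py
├── requirements.txt
├── Dockerfile
├── docker-compose.yml
├── nginx-server/
│   ├── Dockerfile
│   └── nginx.conf
├── core/
│   ├── __init__.py
│   ├── settings.py
│   ├── celery.py
│   └── wsgi.py
└── my_app/
    ├── models.py
    └── tasks.py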

Paste this in nginx-server/Dockerfile

FROM nginx:1.20.1-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d

Paste this in the nginx-server/nginx.conf

server {
    listen 80;

    location / {
        proxy_pass http://backend:8000;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    # Serve collected static files directly, bypassing Django
    location /static/ {
        alias /staticfiles/;
    }
}

The proxy_pass field should match the service name of the Django app in the docker-compose.yml file. To understand these files in detail (explaining them fully would make this post very long), watch my YouTube video here.
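
One detail the /static/ location depends on: Django has to collect its static files into the staticfiles folder that nginx mounts. This step isn't part of the commands above, so treat it as an assumption about your settings.py; with a recent Django where BASE_DIR is a Path, it would look like:

STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "staticfiles"  # collected files land in ./staticfiles on the host

Once the containers are up (see below), you can populate the folder with docker-compose exec backend python manage.py collectstatic --noinput.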

Next, update the database credentials in settings.py. They should match the values used in the docker-compose.yml file. For this article the config looks like this:

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "admin",
        "USER": "admin",
        "PASSWORD": "password",
        "HOST": "database",  # the Postgres service name from docker-compose.yml
        "PORT": 5432,
    }
}

Now spin up the app using

docker-compose up --build

This will build the images and start all the containers. Next we need to go into the Django app's container to run the migrations. Once the containers are up, run the following from a second terminal at the root level (running the two commands separately matters: chaining them with && would execute the second one on your host instead of inside the container):

docker-compose exec backend python manage.py makemigrations
docker-compose exec backend python manage.py migrate

Keep an eye on the logs in the first terminal to confirm that the task runs every minute. To get an in-depth understanding of everything happening in this tutorial, watch the YouTube video here.
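
As a quick sanity check, you can count the rows the task has written so far. This is a hypothetical session, assuming the stack is up and a minute or two has passed:

docker-compose exec backend python manage.py shell

>>> from my_app.models import AdditionResult
>>> AdditionResult.objects.count()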