WagContent
Django application for running the content site behind wagwalking.com. Maintains its own Postgres database along with a CMS and Admin site. Built to be hosted on Heroku with assets deployed to S3.
Navigation
Dependencies
Quickstart
Build the environment
$ cp settings.example .env
$ docker-compose up --build -d
$ docker-compose exec app python3 manage.py migrate
$ docker-compose exec app python3 manage.py createcachetable
$ docker-compose exec app python3 manage.py createsuperuser
$ docker-compose exec app python3 manage.py loadfixtures wagcontent/*/fixtures/* -v3 -i
$ nvm use 10
$ npm i
$ npm run webpack:build-dev
Navigate to http://localhost:8000/admin/ and log in with your superuser!
Useful commands:
Django
$ docker-compose exec app python3 manage.py migrate
- run any outstanding database migrations
$ docker-compose exec app python3 manage.py createcachetable
- create db-backed caches
$ docker-compose exec app python3 manage.py createsuperuser
- create a new superuser
$ docker-compose exec app python3 manage.py loadfixtures wagcontent/*/fixtures/* -v3 -i
- seed the database with test data
Webpack
$ npm run webpack:watch
- run webpack in local dev mode and reload when watched files change
$ npm run webpack:build
- build webpack bundles for production
$ npm run webpack:build-dev
- build webpack for local dev mode and exit
DB Management
Creating a PostgreSQL db dump
In order to quickly return to a stable db state from which you can run new model changes or migrations without undergoing the entire re-seeding process, you can create a local snapshot to use as a restore point:
$ cd <project_dir>
$ docker-compose exec db /bin/bash
$ pg_dump -h db -U postgres dev > /bkp/<DUMP_FILENAME>
The dump file will be available in <project_dir>/data/bkp/<DUMP_FILENAME>
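The path mapping above assumes the db service bind-mounts <project_dir>/data/bkp on the host to /bkp inside the container (per docker-compose.yml); a quick way to confirm the dump landed on the host:
$ ls data/bkp/
$ docker-compose exec db ls /bkp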
Restoring database from PostgreSQL dump
After completing the quickstart tutorial:
- Move the file you want to restore from into <project_dir>/data/bkp/<DUMP_FILENAME>
- Execute:
$ cd <project_dir>
$ docker-compose run db bash
$ psql -h db -U postgres
<specify db password>
postgres=# DROP DATABASE dev;
postgres=# CREATE DATABASE dev;
postgres=# \q
$ psql -h db -U postgres -f /bkp/<DUMP_FILENAME> dev
<specify db password>
Note: you may need to stop the app and celery containers if the DROP action returns an exception due to open connections. You can do this with: docker stop wagcontent_app_1 wagcontent_celery_1
Building Environment Images
There are two images that are built internally and used for the local dev and CI runtime environments. They are not rebuilt on every deployment, since the dependencies rarely change. If dependency updates are made, please make sure to tag a build with build-images in the name (see the example tag after the table). This will rebuild the images described below:
|image|description|
|---|---|
|wagcontent/dpl:latest|Alpine Linux-based image which comes pre-installed with Ruby 2.4. Used with the dpl gem, which is a multi-host deployment tool with support for Heroku deployment.|
|gdal-python-runtime:latest|Alpine Linux-based image which comes pre-installed with Python 2.7. It includes the GDAL geospatial library, which is built into the image.|

TODO: is this now Python 3.8?
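For example, pushing a tag along these lines should trigger the rebuild. The build-images-<short-description> naming here is an assumption; per the note above, the tag just needs build-images somewhere in its name:
# tag the current commit and push the tag to trigger the image rebuild
$ git tag build-images-<short-description>
$ git push origin build-images-<short-description>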
Making changes to the .env, docker-compose.yml or other build files
In order for changes you have made to these build files to propagate, it's necessary to refresh stale containers and restart the container that you changed.
- Make changes
- Run docker-compose pull to refresh the stale containers
- Run docker-compose up --build -d <mutated-container> (usually app) to rebuild the stale container
Notes:
- Some environment variables can be taken from the pubdev environment on Heroku. For example, AWS_S3_ACCESS_KEY_ID and AWS_S3_SECRET_ACCESS_KEY should be taken from Heroku pubdev to ensure that S3 direct uploads work locally (see the sketch after these notes).
- If you choose to enable livereload, you will need to ensure that your default site points to localhost:
$ docker-compose run app python3 manage.py shell
>>> from django.contrib.sites.models import Site
>>> s = Site.objects.first()
>>> s.domain = 'localhost'
>>> s.save()
>>> exit()
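As mentioned in the first note, the S3 keys can be copied from the Heroku pubdev app. A minimal sketch using the Heroku CLI; the app name placeholder is an assumption, substitute whatever the pubdev app is actually called:
# copy these values into your local .env so direct S3 uploads work
$ heroku config:get AWS_S3_ACCESS_KEY_ID --app <pubdev-app-name>
$ heroku config:get AWS_S3_SECRET_ACCESS_KEY --app <pubdev-app-name>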
Migration to ECS Notes:
- Environment variables now live in 2 places: the settings.$env file for public values, and the AWS Parameter Store for secrets. Secrets are accessed via chamber. Anything needed by the CI pipeline can be fetched from the Parameter Store via 'chamber read' (see the sketch after these notes). E.g.: aws ssm get-parameters --region us-west-1 --names /wagcontent/django_secret_key --with-decryption --query 'Parameters[].Value' --output text
- Python runtime environments now use the dotenv package to load the contents of the .env file into Python's environment. This file contains non-secrets and gets baked into the Docker image that's pushed to ECR.
- The python runtime environment is built using the Dockerfile in docker/ci.
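A minimal sketch of pulling a secret with chamber, equivalent to the aws ssm call above; the service name wagcontent and key django_secret_key come from the parameter path shown there, and whether the runtime also uses chamber exec this way is an assumption:
# read a single secret from the Parameter Store
$ chamber read wagcontent django_secret_key
# run a command with all wagcontent secrets exported into its environment
$ chamber exec wagcontent -- python3 manage.py check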