Production Deployment¶
This guide covers deploying djust applications to production with horizontal scaling, a Redis state backend, and WebSocket-aware load balancing.
Architecture Overview¶
               +---------------+
               | Load Balancer |
               |    (Nginx)    |
               +-------+-------+
                       |
      +----------------+----------------+
      |                |                |
+-----v-----+    +-----v-----+    +-----v-----+
|  Server 1 |    |  Server 2 |    |  Server 3 |
| (uvicorn) |    | (uvicorn) |    | (uvicorn) |
+-----+-----+    +-----+-----+    +-----+-----+
      |                |                |
      +----------------+----------------+
                       |
                 +-----v-----+
                 |   Redis   |
                 |  (State)  |
                 +-----------+
State Backend Configuration¶
In-Memory (Development Only)¶
DJUST_CONFIG = {
    'STATE_BACKEND': 'memory',
    'SESSION_TTL': 3600,
}
Suitable for a single server only; state is lost on restart.
Redis (Production)¶
import os

DJUST_CONFIG = {
    'STATE_BACKEND': 'redis',
    'REDIS_URL': os.environ.get('REDIS_URL', 'redis://localhost:6379/0'),
    'SESSION_TTL': 7200,  # 2 hours
}
Requires Redis 6.0+ and redis-py:
pip install redis
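Before the first deploy it is worth confirming that the URL and credentials actually work; a minimal smoke-test sketch using redis-py directly, independent of djust:

import os

import redis

# Raises ConnectionError / AuthenticationError if the URL, password,
# or network path is wrong.
client = redis.Redis.from_url(os.environ.get('REDIS_URL', 'redis://localhost:6379/0'))
assert client.ping()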
Redis Setup¶
Installation¶
# Ubuntu/Debian
sudo apt update && sudo apt install redis-server
sudo systemctl enable redis-server && sudo systemctl start redis-server
# macOS
brew install redis && brew services start redis
# Docker
docker run -d -p 6379:6379 --name redis redis:7-alpine
Production Configuration¶
Edit /etc/redis/redis.conf:
bind 127.0.0.1 ::1
requirepass your-strong-password-here
# Persistence
save 900 1
save 300 10
save 60 10000
# Memory management
maxmemory 256mb
maxmemory-policy allkeys-lru
# Disable dangerous commands
rename-command FLUSHDB ""
rename-command FLUSHALL ""
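After editing the config, restart Redis and confirm the password is actually enforced (using the placeholder password from above):

sudo systemctl restart redis-server
redis-cli ping                                # expect: NOAUTH Authentication required.
redis-cli -a your-strong-password-here ping   # expect: PONG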
ASGI Server¶
Uvicorn (Recommended)¶
uvicorn myproject.asgi:application \
--host 0.0.0.0 \
--port 8000 \
--workers 4 \
--loop uvloop \
--ws websockets \
--log-level warning \
--access-log
Gunicorn + Uvicorn Workers¶
gunicorn myproject.asgi:application \
-k uvicorn.workers.UvicornWorker \
--workers 4 \
--bind 0.0.0.0:8000 \
--log-level warning
Worker count formula: (2 x CPU cores) + 1
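A sketch of applying the formula at start-up, assuming a Linux host where nproc is available:

# (2 x CPU cores) + 1, computed on the machine that runs the server
WORKERS=$(( $(nproc) * 2 + 1 ))
exec uvicorn myproject.asgi:application --host 0.0.0.0 --port 8000 --workers "$WORKERS"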
WebSocket per-message compression (permessage-deflate)¶
VDOM patches are highly compressible: repetitive HTML fragments and JSON patch structures typically shrink 60-80% on the wire. Both Uvicorn (with the websockets library) and Daphne support the permessage-deflate WebSocket extension out of the box and negotiate it with any modern browser client.
No code change needed in your djust app — the compression is transparent. To verify it's active:
# Confirm the djust config reflects compression state
from djust.config import config
config.get("websocket_compression") # True by default
Memory tradeoff. Each active connection holds a zlib compression context of roughly 64 KB. For typical deployments (<10k concurrent WebSockets per worker) this is fine; the bandwidth savings dwarf the RSS cost. High-connection-density deployments (100k+ per worker) may want to disable compression:
# settings.py
DJUST_WS_COMPRESSION = False # disable permessage-deflate negotiation
Note that disabling compression in djust's config is advisory — the actual wire-level compression is negotiated by the ASGI server. To enforce the no-compression decision at the server level, pass the appropriate flag to Uvicorn / Daphne (e.g. Uvicorn's --ws-per-message-deflate=false when using websockets). See the deployment runbook for the full set of ASGI-server flags.
Do not combine with a compressing CDN. Cloudflare, AWS CloudFront, and similar CDNs will double-compress and burn CPU on both sides. Either turn off compression at the CDN for the /ws/ path, or disable it in djust.
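To verify at the wire level what was actually negotiated, inspect the handshake response from any WebSocket client. A minimal sketch with the websockets client library (the URL is illustrative, and the attribute holding the handshake differs between websockets versions):

import asyncio

import websockets


async def main():
    async with websockets.connect("wss://example.com/ws/") as ws:
        # Newer websockets versions expose the handshake on ws.response;
        # older (legacy) versions use ws.response_headers.
        response = getattr(ws, "response", None)
        headers = response.headers if response else ws.response_headers
        # Contains "permessage-deflate" when compression was negotiated.
        print(headers.get("Sec-WebSocket-Extensions"))


asyncio.run(main())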
Nginx Configuration¶
upstream djust_backend {
    ip_hash;  # Sticky sessions for WebSocket
    server 10.0.1.10:8000;
    server 10.0.1.11:8000;
    server 10.0.1.12:8000;
}

server {
    listen 443 ssl http2;
    server_name example.com;

    # WebSocket
    location /ws/ {
        proxy_pass http://djust_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 86400;
    }

    # HTTP
    location / {
        proxy_pass http://djust_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
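To confirm the Upgrade headers survive the proxy, a hand-rolled handshake with curl works (the /ws/ path is whatever your app exposes; --http1.1 matters because the classic WebSocket handshake does not run over HTTP/2):

curl -i -N --http1.1 https://example.com/ws/ \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: $(openssl rand -base64 16)"
# Expect: HTTP/1.1 101 Switching Protocols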
Docker Compose¶
version: '3.8'

services:
  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis-data:/data

  web:
    build: .
    command: uvicorn myproject.asgi:application --host 0.0.0.0 --port 8000
    environment:
      - REDIS_URL=redis://:${REDIS_PASSWORD}@redis:6379/0
      - SESSION_TTL=7200
    depends_on:
      - redis
    deploy:
      replicas: 3

  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - web

volumes:
  redis-data:
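A sketch of bringing the stack up. Note that deploy.replicas requires Compose v2 (or Swarm mode; with older Compose, use --scale web=3 instead), and the nginx.conf you mount must point its upstream at web:8000 rather than the static IPs from the multi-host example above:

echo "REDIS_PASSWORD=$(openssl rand -hex 24)" > .env
docker compose up -d
docker compose ps   # expect redis, nginx, and three web replicas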
Session Management¶
Session TTL Guidelines¶
| Use Case | Recommended TTL |
|---|---|
| E-commerce | 1-2 hours |
| Admin dashboards | 4-8 hours |
| Public content | 30-60 minutes |
| Real-time apps | 15-30 minutes |
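The Docker Compose file above injects SESSION_TTL through the environment; a sketch of reading it in settings so a single image can be tuned per deployment:

import os

DJUST_CONFIG = {
    'STATE_BACKEND': 'redis',
    'REDIS_URL': os.environ.get('REDIS_URL', 'redis://localhost:6379/0'),
    'SESSION_TTL': int(os.environ.get('SESSION_TTL', 7200)),
}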
Cleanup for In-Memory Backend¶
Redis handles expiration automatically. For the memory backend, set up periodic cleanup:
# management/commands/cleanup_sessions.py
from django.core.management.base import BaseCommand

from djust.live_view import cleanup_expired_sessions


class Command(BaseCommand):
    help = 'Clean up expired LiveView sessions'

    def handle(self, *args, **options):
        cleaned = cleanup_expired_sessions(ttl=3600)
        self.stdout.write(f'Cleaned {cleaned} expired sessions')
# Cron: run every hour
0 * * * * cd /path/to/project && python manage.py cleanup_sessions
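If you already run Celery, a beat schedule is an alternative to cron. A sketch, assuming your beat scheduler is running; the task module path is illustrative:

# tasks.py
from celery import shared_task

from djust.live_view import cleanup_expired_sessions


@shared_task
def cleanup_liveview_sessions():
    return cleanup_expired_sessions(ttl=3600)

# settings.py
CELERY_BEAT_SCHEDULE = {
    'cleanup-liveview-sessions': {
        'task': 'myapp.tasks.cleanup_liveview_sessions',
        'schedule': 3600,  # seconds
    },
}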
Health Check Endpoint¶
from django.http import JsonResponse

from djust.state_backend import get_backend


def health_check(request):
    try:
        stats = get_backend().get_stats()
        return JsonResponse({
            'status': 'healthy',
            'backend': stats['backend'],
            'sessions': stats['total_sessions'],
        })
    except Exception as e:
        return JsonResponse({'status': 'unhealthy', 'error': str(e)}, status=503)
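Wire the view into your URLconf and point the load balancer's health probe at it (the path name is illustrative):

# urls.py
from django.urls import path

from .views import health_check

urlpatterns = [
    path('healthz/', health_check),
]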
Deploying Behind an L7 Load Balancer (AWS ALB, Cloudflare, Fly.io)¶
When djust runs behind a trusted L7 load balancer that terminates TLS and health-checks
task IPs directly (AWS ALB, Cloudflare, Fly.io, GCP External HTTP(S) LB, etc.), the task's
private IP rotates on every redeploy or autoscale event. Enumerating them in
ALLOWED_HOSTS is not feasible, and setting ALLOWED_HOSTS = ['*'] normally trips
the djust.A010 system check.
djust provides an explicit opt-in for this topology: set both SECURE_PROXY_SSL_HEADER
and a new DJUST_TRUSTED_PROXIES list/tuple, and djust.A010 / djust.A011 will recognize
that the trusted proxy has already performed Host validation at the edge.
# settings.py — only when you really are behind a trusted L7 proxy
ALLOWED_HOSTS = ['*'] # task IPs rotate; edge proxy performs Host validation
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
# Identify the trusted proxy terminating requests. Any non-empty list/tuple
# opts in — contents are informational and can be used by your middleware.
DJUST_TRUSTED_PROXIES = ['aws-alb'] # or ['cloudflare'], ['fly-edge'], etc.
Security warning. Only use this escape hatch if you are actually behind a trusted L7
proxy that performs Host validation at the edge. On a raw VM / bare-metal with no proxy in
front, ALLOWED_HOSTS = ['*'] opens you up to Host-header attacks — don't paper over the
system check with this setting.
If only one of the two settings is present, djust.A010 / djust.A011 still fire — both are
required so a single typo can't accidentally disable the check.
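Since the contents of DJUST_TRUSTED_PROXIES are informational, you can add app-level Host validation as defense in depth. A minimal middleware sketch; the class name and allowed-host set are illustrative, not part of djust's API:

# middleware.py
from django.conf import settings
from django.http import HttpResponseBadRequest

EDGE_HOSTS = {'example.com', 'www.example.com'}


class EdgeHostMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Only enforce when the trusted-proxy opt-in is active.
        if getattr(settings, 'DJUST_TRUSTED_PROXIES', None):
            host = request.get_host().split(':')[0]
            if host not in EDGE_HOSTS:
                return HttpResponseBadRequest('Unexpected Host header')
        return self.get_response(request)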
Deployment Checklist¶
Infrastructure¶
- [ ] Redis 6.0+ installed with password protection
- [ ] Redis bound to private network only
- [ ] STATE_BACKEND='redis' in production settings
- [ ] Sensitive config in environment variables
- [ ] ASGI server (uvicorn) with multiple workers
- [ ] Nginx with WebSocket proxy and sticky sessions
- [ ] HTTPS enabled for all connections
- [ ] proxy_read_timeout 86400 set for WebSocket routes
Application¶
- [ ] Session cleanup configured (cron or Celery)
- [ ] Health check endpoint added
- [ ] Logging configured for djust.state_backend
- [ ] CSRF protection enabled
- [ ] SESSION_TTL tuned for your use case
Verification¶
- [ ] Load test with concurrent WebSocket connections
- [ ] Verify state sharing across multiple servers
- [ ] Test reconnection after server restart
- [ ] Monitor Redis memory under load
- [ ] Confirm browser back/forward works after navigation
Troubleshooting¶
Redis connection errors: Verify Redis is running (redis-cli ping), check URL and password, confirm firewall rules.
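For example, reusing the deployed connection URL (redis-cli accepts it via -u):

redis-cli -u "$REDIS_URL" ping                      # expect: PONG
redis-cli -u "$REDIS_URL" info memory | head -n 5   # quick look at usage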
Session not found after restart: Ensure STATE_BACKEND='redis' and Redis persistence is enabled (save directives in redis.conf).
Memory issues: Increase maxmemory, enable allkeys-lru eviction, reduce SESSION_TTL, run cleanup more frequently.
Serialization errors: Ensure the djust version matches across all servers, clear the Redis cache, and verify the Rust extension is compiled for the correct Python version.