
An updated look at the BigDino web stack

It's been some time since I've done a good ol' infrastructure post, and the Bigdinosaur.org web stack has evolved a bit over the course of 2018. We're still using HAProxy, Varnish, and Nginx, but the way these applications connect and how they communicate is very different from my 2017-era config. Let's dive in!

The front line: HAProxy

HAProxy is a layer 7-aware reverse proxy and load balancer. It sits at the very top of the web stack and it's the thing you as a visitor first interact with. I'm using HAProxy primarily for SSL termination for all of the sites hosted on the BigDino web server—in other words, whether you're connecting to Fangs, the Chronicles of George, this blog, or whatever else I host, you're talking to HAProxy first.

The HAProxy configuration I'm using is as follows:

global
	log /dev/log	local0
	log /dev/log	local1 notice
	chroot /var/lib/haproxy
	stats timeout 30s
	user haproxy
	group haproxy
	nbthread 4
	tune.ssl.cachesize 1000000

	ca-base [redacted]
	crt-base [redacted]

	ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
	ssl-dh-param-file [redacted]/dhparam.pem

defaults
	log	global
	mode    http
	option  tcpka
	option  dontlognull
	option  httplog
	option  tcp-smart-connect
	option  splice-auto
	timeout connect 5000
	timeout client  50000
	timeout server  50000
	errorfile 400 /etc/haproxy/errors/400.http
	errorfile 403 /etc/haproxy/errors/403.http
	errorfile 408 /etc/haproxy/errors/408.http
	errorfile 500 /etc/haproxy/errors/500.http
	errorfile 502 /etc/haproxy/errors/502.http
	errorfile 503 /etc/haproxy/errors/503.http
	errorfile 504 /etc/haproxy/errors/504.http

frontend unifiedfront
	bind *:80,:::80 v6only tfo
	## HTTP to HTTPS redirect
	acl plain ssl_fc,not
	http-request redirect scheme https if plain
	bind *:443,:::443 v6only tfo ssl crt [several certificates redacted] ecdhe secp384r1 alpn h2,http/1.1
	default_backend tovarnish

backend tovarnish
	## Set real IP if cloudflare
	acl cloudy src 2400:cb00::/32 2405:b500::/32 2606:4700::/32 2803:f800::/32 2c0f:f248::/32 2a06:98c0::/29	
	http-request set-header X-Client-IP %[req.hdr(CF-Connecting-IP)] if cloudy
	http-request set-header X-Client-IP %[src] if !cloudy
	http-request add-header X-Forwarded-Proto https
	server varnish /dev/varnish-listen.sock check

(If you're curious about any of the specific config options, you can look them up in the HAProxy documentation.)

Beyond SSL termination, HAProxy is also listening on TCP port 80 for regular HTTP requests and redirecting them to HTTPS before they get down to the web server.

As of version 1.9, HAProxy supports downstream communication via unix domain sockets, and one of my goals with the latest configuration was to go UDS end-to-end for the entire stack. This presents some configuration challenges, since going UDS means that you have to be particularly observant about keeping track of client IP addresses.
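To make the UDS idea concrete, here's a minimal, self-contained Python sketch (not code from the actual stack) of speaking HTTP over a unix domain socket, which is essentially what each layer here does when talking to the one below it:

```python
import http.client
import os
import socket
import tempfile
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# An HTTPServer bound to a unix socket path instead of a TCP port.
class UDSHTTPServer(HTTPServer):
    address_family = socket.AF_UNIX

    def server_bind(self):
        # Skip HTTPServer's TCP-specific bind logic; just bind the path.
        self.socket.bind(self.server_address)
        self.server_name = "uds"
        self.server_port = 0

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello over UDS"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# An HTTPConnection that connects to a unix socket path.
class UDSHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path):
        super().__init__("localhost")
        self._uds_path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._uds_path)

sock_path = os.path.join(tempfile.mkdtemp(), "demo.sock")
server = UDSHTTPServer(sock_path, Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = UDSHTTPConnection(sock_path)
conn.request("GET", "/")
resp = conn.getresponse()
data = resp.read()
print(resp.status, data.decode())
server.shutdown()
```

The client-side trick (overriding `connect()` to dial a socket path) is roughly what HAProxy, Varnish, and Nginx each do natively when pointed at a `.sock` path instead of a host:port pair.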

Along those lines, observant readers might notice that the backend configuration section has an extra ACL and does some header voodoo. I want to specifically capture the client's real IP address so that the lower layers of the stack can see it (and, where appropriate, log it). Rather than trusting a provided X-Forwarded-For header, I'm setting my own.

Because BigDino uses Cloudflare as a CDN for a couple of sites, I'm extracting the contents of the Cloudflare-provided CF-Connecting-IP header on traffic that originates from Cloudflare's IP addresses; for traffic originating everywhere else, I'm setting the IP address to what HAProxy sees as the client's source. The reason for doing this will be explained more thoroughly in the next section, but the quick explanation is that Varnish has changed its X-Forwarded-For behavior and I'm working around that change.
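The trust decision is simple enough to sketch in a few lines of Python. This is a standalone illustration, not production code; the ranges are the Cloudflare IPv6 blocks from the ACL above, and a real deployment would also include Cloudflare's IPv4 ranges:

```python
import ipaddress

# Cloudflare IPv6 ranges, as listed in the HAProxy ACL.
CLOUDFLARE_NETS = [ipaddress.ip_network(n) for n in (
    "2400:cb00::/32", "2405:b500::/32", "2606:4700::/32",
    "2803:f800::/32", "2c0f:f248::/32", "2a06:98c0::/29",
)]

def real_client_ip(src_ip, cf_connecting_ip=None):
    """Mimic the HAProxy logic: trust CF-Connecting-IP only when the
    connection actually originates from a Cloudflare address."""
    addr = ipaddress.ip_address(src_ip)
    from_cloudflare = any(addr in net for net in CLOUDFLARE_NETS)
    if from_cloudflare and cf_connecting_ip:
        return cf_connecting_ip
    return src_ip

# A request relayed by Cloudflare: the header wins.
print(real_client_ip("2606:4700::1234", "203.0.113.9"))  # 203.0.113.9
# A direct request: the header (spoofed or not) is ignored.
print(real_client_ip("198.51.100.7", "203.0.113.9"))     # 198.51.100.7
```

The key property is that the header is only honored when the source address check passes, so a random client can't spoof its way into the logs by sending its own CF-Connecting-IP.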

But why have a distinct layer for SSL termination? Why not let this happen at the web server layer like most howtos recommend?

Ahhh, that is indeed a good question. The answer is that by sticking an SSL terminator at the very top of the stack, I can also use a caching layer.

Gimme the cache: Varnish

HAProxy sends traffic down the stack to Varnish, a fast caching reverse proxy. Varnish holds objects in RAM and serves them up much faster than the actual web servers and applications running at lower layers in the stack, and makes a huge overall contribution to site performance.

But Varnish can't (easily) cache encrypted traffic, because encrypted traffic looks like random garbage. Traditionally, if you wanted to use cache, you had to either forego encryption or get creative; I chose to get creative and simply break the web stack into layers so that I could have SSL for everything and also cache for everything.

Varnish's configuration looks like this:

# Combined VCL for all hosts on this server

# We're using unix sockets, so we need to declare that we're using VCL 4.1
vcl 4.1;

# Backend definition
backend default {
	.path = "/var/run/nginx-default.sock";
	.connect_timeout = 600s;
	.first_byte_timeout = 600s;
	.between_bytes_timeout = 600s;
	.max_connections = 800;
}

# HTTP/2 backend
backend h2 {
	.path = "/var/run/nginx-h2.sock";
	.connect_timeout = 600s;
	.first_byte_timeout = 600s;
	.between_bytes_timeout = 600s;
	.max_connections = 800;
}

# Import Varnish Standard Module so I can serve custom error pages
import std;

sub vcl_recv {

	# Because we're using unix domain sockets for basically everything, we
	# need to work around Varnish's default behavior of appending "client.ip"
	# (which will always be 0.0.0.0 with UDS) to X-Forwarded-For. Since
	# HAProxy is sending us the real IP in X-Client-IP and it's vetted against
	# Cloudflare's IP list, we can just toss whatever's in X-Forwarded-For
	# and re-set it from the known-good X-Client-IP. Whew.
	unset req.http.X-Forwarded-For;
	set req.http.X-Forwarded-For = req.http.X-Client-IP;

	if (req.method == "POST") {
		return (pass);
	}

	if (req.http.upgrade ~ "(?i)websocket") {
		return (pipe);
	}

	# Send HTTP/2 requests to the proper backend
	if (req.http.protocol ~ "HTTP/2") {
		set req.backend_hint = h2;
	} else {
		set req.backend_hint = default;
	}

	# No PHP for Bigdino, Fangs, and CoG
	if (req.http.host ~ "(chroniclesofgeorge.com|bigdinosaur.org|fangs.ink)") {
		if (req.url ~ "\.php(\?.*)?$") {
			return (synth(700));
		}
	}

	# Cache only static assets in Discourse assets dir & pass everything else
	if (req.http.host ~ "discourse.bigdinosaur.org") {
		if (!(req.url ~ "(^/uploads/|^/assets/|^/user_avatar/)")) {
			return (pass);
		}
	}

	# Ignore traffic to Ghost blog admin stuff
	if (req.http.host ~ "blog.bigdinosaur.org") {
		if (req.url ~ "^/(api|signout)") {
			return (pass);
		} elseif (req.url ~ "^/ghost" && (req.url !~ "^/ghost/(img|css|fonts)")) {
			return (pass);
		}
	}

	# Remove cookies from things that should be static, if any are set
	if (req.url ~ "\.(png|gif|jpg|swf|css|js|ico|woff|ttf|eot|svg)(\?.*|)$") {
		unset req.http.Cookie;
		return (hash);
	}
	if (req.url ~ "^/images") {
		unset req.http.Cookie;
		return (hash);
	}

	# Remove Google Analytics and Piwik cookies so pages can be cached
	if (req.http.Cookie) {
		set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", "");
		set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(_pk_(ses|id)[\.a-z0-9]*)=[^;]*", "");
	}
	if (req.http.Cookie == "") {
		unset req.http.Cookie;
	}
}

sub vcl_pass {
	set req.http.connection = "close";
}

sub vcl_pipe {
	if (req.http.upgrade) {
		set bereq.http.upgrade = req.http.upgrade;
	}
}

sub vcl_backend_response {
	set beresp.http.x-url = bereq.url;
	set beresp.http.X-Host = bereq.http.host;

	# Strip cookies before static items are inserted into cache.
	if (bereq.url ~ "\.(png|gif|jpg|swf|css|js|ico|html|htm|woff|eot|ttf|svg)$") {
		unset beresp.http.set-cookie;
	}
	if (bereq.http.host ~ "www.chroniclesofgeorge.com") {
		set beresp.ttl = 1008h;
	} else {
		if (beresp.ttl < 24h) {
			if (beresp.http.Cache-Control ~ "(private|no-cache|no-store)") {
				set beresp.ttl = 60s;
			} else {
				set beresp.ttl = 24h;
			}
		}
	}
}

sub vcl_deliver {

	# Display hit/miss info
	if (obj.hits > 0) {
		set resp.http.X-Cache = "HIT";
	} else {
		set resp.http.X-Cache = "MISS";
	}

	# Remove the Varnish header
	unset resp.http.X-Varnish;
	unset resp.http.Via;
	unset resp.http.X-Powered-By;
	unset resp.http.Server;

	# HTTP headers for all sites
	set resp.http.X-Are-Dinosaurs-Awesome = "HELL YES";
	set resp.http.Server = "on fire";
	set resp.http.X-Hack = "don't hack me bro";
	set resp.http.Referrer-Policy = "strict-origin-when-cross-origin";
	set resp.http.Strict-Transport-Security = "max-age=31536000; includeSubDomains; preload";
	set resp.http.X-Content-Type-Options = "nosniff";
	set resp.http.X-XSS-Protection = "1; mode=block";
	set resp.http.X-Frame-Options = "DENY";
	set resp.http.Expect-CT = {"max-age=0; report-uri="https://bigdino.report-uri.io/r/default/ct/reportOnly""};

	# Site-specific HTTP headers
	if (req.http.host ~ "fangs.ink") {
		set resp.http.Content-Security-Policy = "default-src https:; img-src 'self' https: data:; object-src 'none'; script-src 'self' https://analytics.bigdinosaur.net https://ajax.googleapis.com 'unsafe-inline'; font-src 'self'; upgrade-insecure-requests; frame-ancestors 'none'";
	}

	if (req.http.host ~ "www.bigdinosaur.org") {
		set resp.http.Content-Security-Policy = "default-src https:; img-src 'self' https: data:; object-src 'none'; script-src 'self' https://analytics.bigdinosaur.net https://ajax.googleapis.com 'unsafe-inline'; font-src 'self'; upgrade-insecure-requests; frame-ancestors 'none'";
	}

	if (req.http.host ~ "blog.bigdinosaur.org") {
		set resp.http.Content-Security-Policy = "default-src https:; style-src 'self' 'unsafe-inline' https://maxcdn.bootstrapcdn.com/; img-src 'self' https: data:; object-src 'none'; script-src 'self' https://analytics.bigdinosaur.net https://code.jquery.com/ 'unsafe-inline' 'unsafe-eval'; font-src 'self' https://maxcdn.bootstrapcdn.com/; upgrade-insecure-requests; frame-ancestors 'none'";
	}

	if (req.http.host ~ "www.chroniclesofgeorge.com") {
		set resp.http.Content-Security-Policy = "default-src https:; style-src 'self'; img-src 'self' https: data:; object-src 'none'; script-src 'self' https://analytics.bigdinosaur.net https://ajax.googleapis.com 'unsafe-inline'; font-src 'self'; upgrade-insecure-requests; frame-ancestors 'none'";
	}

	# Remove custom error header
	unset resp.http.MyError;
	return (deliver);
}

sub vcl_synth {

	if (resp.status == 700) {
		set resp.status = 404;
		set resp.reason = "Not Found";
		synthetic ({"Ain't no PHP on this server so fuck off with that"});
	}

	if (resp.status == 405) {
		set resp.http.Content-Type = "text/html; charset=utf-8";
		set resp.http.MyError = std.fileread("/var/www/error/varnisherr.html");
	}
}

Before we break that down, though, note that I've also modified Varnish's startup options so that it listens on a unix socket rather than a TCP port. Since I'm running Ubuntu 16.04 server, that means creating a systemd override, which you can do with the following command:

$ sudo systemctl edit varnish.service

This brings up an empty nano edit window. I've added the following three lines, which unset the Varnish service's ExecStart parameter and replace it with a new line specifying a listen socket and enabling HTTP/2:

[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -a /var/lib/haproxy/dev/varnish-listen.sock,group=haproxy,mode=660 -T localhost:6082 -f /etc/varnish/default.vcl -S [redacted] -s malloc,1G -p feature=+http2

The one wrinkle here is that HAProxy starts itself in a chroot jail, and in order to use UDS, I have to have Varnish create its listen socket inside of HAProxy's chroot jail and also make sure the socket is created with appropriate permissions. (Figuring out this specific issue took me a lot longer than it should have.)
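The permissions dance can be reproduced in a few lines of Python (a throwaway sketch with stand-in paths, not part of the stack): whoever creates the socket controls its mode, and the process on the other side needs group access to connect.

```python
import os
import socket
import stat
import tempfile

sock_path = os.path.join(tempfile.mkdtemp(), "varnish-listen.sock")

# Create the listen socket with mode 660, analogous to varnishd's
# ",group=haproxy,mode=660" listen option: tighten the umask while
# binding so only the owner and group can connect to the socket.
old_umask = os.umask(0o117)  # 0o777 & ~0o117 == 0o660
try:
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(sock_path)
    srv.listen(1)
finally:
    os.umask(old_umask)

mode = stat.S_IMODE(os.stat(sock_path).st_mode)
print(oct(mode))  # 0o660
```

In the real stack, varnishd handles the group/mode settings itself via the `-a` option, and the chroot wrinkle is just about making sure the path it creates lives where HAProxy's jailed view of the filesystem can see it.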

Now, looking at the Varnish VCL, you'll notice it starts off with two back-ends—this is how I'm supporting HTTP/2 down through the stack. The first backend is for non-HTTP/2 traffic and the second is for HTTP/2. Varnish makes the call on what traffic to send where with this bit of code in my vcl_recv sub:

# Send HTTP/2 requests to the proper backend
	if (req.http.protocol ~ "HTTP/2") {
		set req.backend_hint = h2;
	} else {
		set req.backend_hint = default;
	}

The rest of the config is mostly unremarkable. I've elected to use Varnish rather than HAProxy to set global and site-specific HTTP headers, since it's just a hell of a lot easier to do it at the Varnish level. For the Expect-CT certificate transparency header, I'm leaning on Scott Helme's invaluable Report URI service.
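One of the fiddlier bits in vcl_recv is the pair of cookie-stripping regsuball() calls; they translate almost directly into Python, which makes them easy to test. This is a standalone sketch of the same two substitutions:

```python
import re

def strip_tracking_cookies(cookie_header):
    """Apply the same substitutions as the regsuball() calls in vcl_recv:
    drop Google Analytics (__utma etc.), has_js, and Piwik/Matomo cookies."""
    cookie_header = re.sub(r"(^|;\s*)(__[a-z]+|has_js)=[^;]*", "", cookie_header)
    cookie_header = re.sub(r"(^|;\s*)(_pk_(ses|id)[\.a-z0-9]*)=[^;]*", "", cookie_header)
    return cookie_header

cleaned = strip_tracking_cookies("__utma=1.2.3; session=abc123; _pk_id.1.dead=xyz")
print(repr(cleaned))  # '; session=abc123'
```

Only the analytics cookies get scrubbed; a real session cookie survives, so logged-in users still get passed through while anonymous page views become cacheable. (Varnish then unsets the Cookie header entirely if the scrubbing leaves it empty.)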

Since I'm (finally) not running PHP applications on the BigDino server (with the exception of Matomo, which has its own analytics-specific domain), I'm also capturing any requests for PHP files and serving up a fast Varnish-synthesized 404 message. This shifts the load of answering skr1pt k1dd13s' incessant PHP bot requests onto the cache layer, where there's effectively no penalty to synthesizing a fast error message.
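A quick way to sanity-check the VCL's match pattern (`\.php(\?.*)?$`) is to run it over some typical bot probes; here's that same regex exercised in Python:

```python
import re

# The same pattern used in vcl_recv to synthesize 404s for PHP requests.
PHP_PATTERN = re.compile(r"\.php(\?.*)?$")

probes = [
    "/wp-login.php",           # classic bot probe -> 404
    "/admin.php?user=admin",   # query string still matches -> 404
    "/images/photo.jpg",       # normal asset -> passes through
    "/phpinfo",                # no .php extension -> passes through
]
for url in probes:
    verdict = "404" if PHP_PATTERN.search(url) else "pass"
    print(f"{url}: {verdict}")
```

The optional `(\?.*)?` group is what keeps bots from sneaking past the filter by tacking a query string onto the URL.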

Nginx under all

Below Varnish sits the actual web server: Nginx. The Nginx main configuration is mostly unremarkable:

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
	worker_connections 1024;
	use epoll;
	multi_accept on;
}

http {
	# Basic Settings
	sendfile on;
	tcp_nopush on;
	tcp_nodelay on;
	types_hash_max_size 2048;
	server_tokens off;
	port_in_redirect off;
	server_name_in_redirect off;

	include /etc/nginx/mime.types;
	default_type application/octet-stream;

	## Legacy SSL settings
	ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
	ssl_prefer_server_ciphers on;
	ssl_session_cache shared:SSL:10m;
	ssl_buffer_size 4k;

	# Logging Settings
	log_format show_hosts '[$time_local] $http_x_forwarded_for - $server_name: $request $status Referrer: "$http_referer" UA: "$http_user_agent"';
	access_log /var/log/nginx/access.log show_hosts;
	error_log /var/log/nginx/error.log error;

	# Gzip Settings
	gzip on;
	gzip_disable "msie6";
	gzip_min_length 1100;
	gzip_vary on;
	gzip_proxied any;
	gzip_buffers 16 8k;
	gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/rss+xml text/javascript image/svg+xml application/x-font-ttf font/opentype application/vnd.ms-fontobject application/javascript;

	# Security stuff for size limits and buffer overflows
	client_max_body_size  0; 
	client_body_timeout   10;
	client_header_timeout 10;
	keepalive_timeout     10 10;
	send_timeout          10;

	# Websocket compatibility
	map $http_upgrade $connection_upgrade {
		default Upgrade;
		''      close;
	}
	map $http_x_forwarded_proto $thescheme {
		default $scheme;
		https https;
	}

	# GeoIP stuff
	geoip_country /usr/local/share/GeoIP/GeoIP.dat;
	geoip_city /usr/local/share/GeoIP/GeoLiteCity.dat;

	# Virtual Host Configs
	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;

	server_names_hash_bucket_size 128;
}

The only thing I have in the conf.d directory is a variable definition for php-fpm to make vhost PHP configurations a little easier to read:

upstream php7-fpm-sock {
	server unix:/var/run/php/php7.3-fpm.sock;
}

There are a bunch of vhosts running on the server, all of which share similar configurations. The vhost file for this blog, for example, looks like this:

server {
	server_name  blog.bigdinosaur.org;
	listen unix:/var/run/nginx-default.sock;
	listen unix:/var/run/nginx-h2.sock http2;
	root /var/www-10ghost/;
	index index.js index.html;
	autoindex off; 

	location = /.well-known/security.txt { allow all; }
	location = /.well-known/security.txt.sig { allow all; }

	location / {
		proxy_pass http://localhost:2368/;
		proxy_set_header Host $host;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection "Upgrade";
	}
}

The important bits here are the two listen sockets; the second one, with HTTP/2 enabled, is the one that gets used for pretty much everything. Nginx fortunately doesn't need to be in charge of SSL/TLS in order for it to speak HTTP/2 to clients, which is how this blog shows up as fully HTTP/2.

Annoyingly, Ghost's support for UDS is a little fiddly, so I still have to use a TCP port to proxy blog requests. I hope at some point in the future that the Ghost devs will implement proper UDS support, but I understand it's not a huge priority.

And that's the BigDino web stack, as of the beginning of 2019!

Discuss this post on the BigDinosaur forums