The humble Raspberry Pi is a very versatile thing: a low-cost computer that can become a simple low-end desktop, a low-power server, or a controller for electronics projects via its numerous GPIO pins. In my case it’s the middle option. I currently have two Raspberry Pis managing various functions on my home network, such as:

  • DHCP to assign IP addresses and routing information to devices depending on their purpose.
  • CUPS to allow my USB inkjet printer to receive print jobs from any computer on the network.
  • Pi-Hole to block annoying (and potentially malicious) advertisements at network level.
  • Cloudflared (a.k.a. Argo Tunnel) to provide a channel for making DNS requests securely over HTTPS.

It’s the latter two that I’ll be focusing on in this post. For the most part this post is based on an existing how-to by Ben Dews; however, I have recently been moving services into Docker containers for the sake of quick disaster recovery (e.g. should the SD card fail). In the case of a full rebuild, I just install Raspberry Pi OS, install Docker, run the docker-compose script, and everything should be back to normal quickly.

Prerequisites

From a fresh install of Raspberry Pi OS (formerly Raspbian), install Docker and docker-compose from the package manager:

$ sudo apt update
$ sudo apt install docker.io docker-compose
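
If you’d rather not prefix every Docker command with sudo, you can optionally add your user to the docker group (you’ll need to log out and back in for it to take effect), and it’s worth confirming both tools are present:

$ sudo usermod -aG docker $USER
$ docker --version
$ docker-compose --version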

Once those have been installed along with their dependencies, we can make a start with creating our docker-compose script.

Creating the Stack

Since this stack will consist of two containers communicating with one another, it’s better to use docker-compose to organise the stack under one roof rather than bringing up and taking down two separate containers.

cloudflared – DNS over HTTPS

So let’s create pihole-doh.yml and firstly define our cloudflared service:

version: "3.5"
services:
  cloudflared:
    image: crazymax/cloudflared:latest
    container_name: cloudflared
    ports:
      - "5053:5053/udp"
      - "49312:49312/tcp"
    environment:
      - "TZ=Europe/London"
      - "TUNNEL_DNS_UPSTREAM=https://1.1.1.1/dns-query,https://1.0.0.1/dns-query"
    restart: always

For those unfamiliar with docker-compose (and I will readily admit I’m still a newcomer to this), this may seem like a lot, so I’ll break it down.

  cloudflared:
    image: crazymax/cloudflared:latest
    container_name: cloudflared

Firstly we have the image this container will run; in this case I’m using an image of cloudflared created by GitHub user crazy-max. There are a number of alternative images available if you want to try a different implementation. This particular image is the second-most popular cloudflared image on Docker Hub, the most popular being one created by visibilityspots. The container_name value is just a friendly name we assign to the container; otherwise Docker will randomly generate one.

    ports:
      - "5053:5053/udp"
      - "49312:49312/tcp"

We only need to forward two ports here, and since nothing else on the host needs them we can map them directly:

  • 5053 – The listening port for the DNS-over-HTTPS proxy server
  • 49312 – For metrics if you want to hook the service into a reporting tool such as Prometheus.
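
You can poke at the metrics endpoint by hand once the container is running – cloudflared exposes Prometheus-format metrics over plain HTTP, so assuming the image serves them on the port we’ve just mapped (and at the default /metrics path), something like this should work:

$ curl http://127.0.0.1:49312/metrics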

    environment:
      - "TZ=Europe/London"
      - "TUNNEL_DNS_UPSTREAM=https://1.1.1.1/dns-query,https://1.0.0.1/dns-query"

The environment section is for your personal preferences. I live in the UK so I set the timezone accordingly, while the TUNNEL_DNS_UPSTREAM parameter allows you to set your DNS-over-HTTPS provider of choice if you don’t want to use Cloudflare.
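
For example, pointing it at Quad9 instead should just be a matter of swapping the URLs (I’ve only tested the Cloudflare endpoints myself):

      - "TUNNEL_DNS_UPSTREAM=https://dns.quad9.net/dns-query"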

    restart: always

Finally, we always want the container to restart in the event of an error, or if the Raspberry Pi reboots.
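
If you want to double-check the policy once the container has been created, docker inspect will show it:

$ docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' cloudflared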

Verifying cloudflared

We can manually verify the cloudflared service is working by deploying the container and making a DNS request using dig:

$ docker-compose -f "pihole-doh.yml" up -d

Once the container has successfully started we can make a DNS query over port 5053 using dig:

$ dig @127.0.0.1 -p 5053 michaeldodd.net

; <<>> DiG 9.11.5-P4-5.1+deb10u2-Debian <<>> @127.0.0.1 -p 5053 michaeldodd.net
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16039
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 27b0cfa3930c9191 (echoed)
;; QUESTION SECTION:
;michaeldodd.net.		IN	A

;; ANSWER SECTION:
michaeldodd.net.	883	IN	A	69.163.157.115

;; Query time: 3 msec
;; SERVER: 127.0.0.1#5053(127.0.0.1)
;; WHEN: Sun Nov 08 12:41:37 GMT 2020
;; MSG SIZE  rcvd: 87
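
If the query doesn’t come back with an answer, the container’s logs are the first place to look:

$ docker-compose -f "pihole-doh.yml" logs cloudflared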

We can verify that the cloudflared container is making this request by using:

$ docker-compose -f "pihole-doh.yml" down

to bring down the container and re-running the dig command. This time it should time out.

$ dig @127.0.0.1 -p 5053 michaeldodd.net

; <<>> DiG 9.11.5-P4-5.1+deb10u2-Debian <<>> @127.0.0.1 -p 5053 michaeldodd.net
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached

Pi-Hole – Network-wide ad-blocking

Time to add our second service:

  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    depends_on: 
      - cloudflared
    network_mode: "host"
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp"
      - "80:80/tcp"
      - "443:443/tcp"
    environment:
      TZ: 'Europe/London'
      WEBPASSWORD: 'superSecurePasswordHonest!'
      ServerIP: '192.168.0.10'
      DNS1: '172.22.0.1#5053'
      DNS2: 'no'
    # Volumes store your data between container upgrades
    volumes:
      - './pihole/etc-pihole/:/etc/pihole/'
      - './pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/'
    # Recommended but not required (DHCP needs NET_ADMIN)
    #   https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    cap_add:
      - NET_ADMIN
      - NET_BIND_SERVICE
    restart: always

As above, we give our container a friendly name, and this time we’re using the official image provided by Pi-Hole. We want to ensure that this service starts after cloudflared (note that depends_on only controls start order; it won’t wait for cloudflared to be fully ready), and that we can communicate with the cloudflared container:

    depends_on:
       - cloudflared

Next we have our list of ports:

    network_mode: host
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp"
      - "8081:80/tcp"
      - "443:443/tcp"

A few more this time, and most of these we want to directly map:

  • 53 (TCP and UDP) – DNS
    Listening ports for DNS requests. It’s important to directly map these ports as DNS handling is at the heart of what Pi-Hole does.
  • 67 – DHCP
    Pi-Hole can also act as a DHCP server so it can be beneficial to leave this in. However in my personal setup I have removed this mapping as I am using isc-dhcp-server to handle DHCP requests on my home network.
  • 80 – Web Server
    The web interface for Pi-Hole’s admin console. This one should be safe to map to whatever port you like, for example to avoid conflict if you’re also running a web server.
  • 443 – SSL
    Allows Pi-Hole to catch adverts served up over SSL.

We’re also binding Pi-Hole directly to our physical network interface by using host network mode, as I’ve not been able to get it working in bridged mode. There may well be a reason for that which I’ve not yet discovered – answers on a postcard please! (One side effect: with host networking Docker ignores the ports list above, so it’s mainly there as documentation.)

There are a number of environment variable configuration options available on the Docker Hub page, but for now we’ll only make use of a few.

    environment:
      TZ: 'Europe/London'
      WEBPASSWORD: 'superSecurePasswordHonest!'
      ServerIP: '192.168.1.10'
      DNS1: '172.22.0.1#5053'
      DNS2: 'no'

The TZ value is the same as before, albeit formatted differently due to the way the image is set up. This could probably be improved upon by using variable substitution (sketched below), so we only need to change one line to change the timezone for all services.

It’s generally a very bad idea to put credentials in a docker-compose file, and docker-compose allows for the importing of secrets. But for the sake of simplicity we’ll use a plain ol’ password here.
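
If you do want to keep the timezone and password out of the compose file itself, the lightest-touch option is docker-compose’s variable substitution: values in a .env file in the directory you run docker-compose from are picked up automatically. A minimal sketch, with variable names of my own choosing:

# .env – keep this out of version control
TZ=Europe/London
PIHOLE_WEBPASSWORD=superSecurePasswordHonest!

The services can then reference them like so:

    environment:
      TZ: '${TZ}'
      WEBPASSWORD: '${PIHOLE_WEBPASSWORD}'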

As we’re using host mode, supposedly we need to specify the IP of our server here with the ServerIP variable. However I’ve found that Pi-Hole tends to work fine without this, so your mileage may vary.

Finally, we want to configure Pi-Hole to make use of secure DNS requests by ensuring that upstream DNS requests are only routed via our cloudflared service. Therefore we’re sending all upstream DNS queries via the docker container network on port 5053, and not using any additional DNS providers.
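
A caveat on that 172.22.0.1 address: it’s simply the gateway of the Docker network that compose created on my Pi (docker network inspect will show yours), so it may differ on your machine. If you want it to be predictable, you can pin the subnet at the bottom of the same compose file – a sketch, assuming nothing else on your host already uses that range:

networks:
  default:
    ipam:
      config:
        - subnet: 172.22.0.0/24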

    # Volumes store your data between container upgrades
    volumes:
      - './pihole/etc-pihole/:/etc/pihole/'
      - './pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/'

These lines create storage directories for Pi-Hole outside of the container, so that configurations are retained if the container is recreated after an upgrade. These folders will be created in the same directory as the pihole-doh.yml config file.

    # Recommended but not required (DHCP needs NET_ADMIN)
    #   https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    cap_add:
      - NET_ADMIN
      - NET_BIND_SERVICE
    restart: always

Finally, some additional capabilities that Pi-Hole requires should you wish to use it as a DHCP server. Additional details can be found in the note on capabilities linked in the comment above.
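
As an aside, if you’re not using Pi-Hole for DHCP (I’m not, as mentioned earlier), my understanding is that you can drop the "67:67/udp" mapping entirely and slim the capabilities down to the following, although I’ve kept the upstream defaults above:

    cap_add:
      - NET_BIND_SERVICE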

Running and verifying the stack

So your final pihole-doh.yml file should look something like this:

version: "3.5"
services:
  cloudflared:
    image: crazymax/cloudflared:latest
    container_name: cloudflared
    ports:
      - "5053:5053/udp"
      - "49312:49312/tcp"
    environment:
      - "TZ=Europe/London"
      - "TUNNEL_DNS_UPSTREAM=https://1.1.1.1/dns-query,https://1.0.0.1/dns-query"
    restart: always
  
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    depends_on: 
      - cloudflared
    network_mode: host
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp"
      - "80:80/tcp"
      - "443:443/tcp"
    environment:
      TZ: 'Europe/London'
      WEBPASSWORD: 'superSecurePasswordHonest!'
      DNS1: '172.22.0.1#5053'
      DNS2: 'no'
      ServerIP: '192.168.0.10'
    # Volumes store your data between container upgrades
    volumes:
      - './pihole/etc-pihole/:/etc/pihole/'
      - './pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/'
    # Recommended but not required (DHCP needs NET_ADMIN)
    #   https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    cap_add:
      - NET_ADMIN
      - NET_BIND_SERVICE
    restart: always

(It should go without saying that you should use a different password, or better still, put your credentials in a different file)

The next thing to do is to bring the stack up with docker-compose:

$ docker-compose -f "pihole-doh.yml" up -d

Providing you don’t have any other services that have already nabbed the ports, both containers should be up and running within a few seconds. You can verify this by visiting the IP of your Raspberry Pi in a web browser (e.g. http://192.168.0.10/admin), which should present you with the Pi-Hole admin panel.
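
You can also check resolution end-to-end from any machine on the network by querying the Pi on the standard DNS port (substitute your own Pi’s IP address):

$ dig @192.168.0.10 michaeldodd.net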

Pi-Hole Admin Panel

You can now also verify that your DNS requests are being made over HTTPS by visiting Cloudflare’s ESNI Checker tool. After running the test, the first two columns (Secure DNS and DNSSEC) should both be green. The latter two (TLS 1.3 and Encrypted SNI) are browser-based features so fall outside the scope of this post.

Cloudflare ESNI test results showing secure DNS

The last thing to do is to ensure that all devices on your network are using your Raspberry Pi’s IP address as their DNS server. This can be done via the DHCP settings on your router or DHCP server, or manually on each device.

… or you can run this locally

During the process of verifying these steps on a Debian virtual machine, I found that I could run this stack locally, configuring the DNS server on my network connection to point to 127.0.0.1. This means that when used in combination with a VPN this stack could provide an extra layer of security when using public WiFi networks for example, as well as blocking annoying adverts without the need for a browser extension.
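
If your machine uses NetworkManager, one way to point the active connection at the local stack looks something like this (the connection name is just a placeholder, so substitute your own):

$ nmcli connection modify "Wired connection 1" ipv4.dns "127.0.0.1" ipv4.ignore-auto-dns yes
$ nmcli connection up "Wired connection 1"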

While I’ve tested this locally under Debian Linux, I’ve not been able to verify that it works under macOS or Windows. I will update this post with my findings as and when I’m able to check.

UPDATE February 2022 – I’ve revisited this, and by setting the value of DNS1 to point at the Docker container network (172.22.0.1) rather than localhost, I now have DNS-over-HTTPS running locally on my MacBook. I have updated this post to reflect this.