
Too much about DNS (including Pi-Hole, Mullvad and blocking)

These days, with the amount of shit that connects to our wifi networks, we can’t be sure that everything is going to follow the instructions we’ve given it, especially given how many of these devices might want to phone home, load ads, track where they are, etc.

The best way to prevent this is to not buy these devices. Nah, we’re not digital Luddites, but we can put some protections in place to prevent the most egregious of these activities. At home I’ve got the following set up:

  • Pi-Hole
  • cloudflared to route DNS over HTTPS to Mullvad
  • re-routing all DNS queries to the Pi-Hole
  • blocking DNS over HTTPS and TLS for all clients except the Pi-Hole

This defeats devices with hard-coded DNS servers by forwarding all DNS requests over port 53 to my Pi-Hole, blocking DNS over TLS (port 853) to a known set of public resolvers, and blocking port 443 access to that same set of hosts.

Cloudflared to Mullvad

The first thing I set up was the cloudflared tool to proxy local DNS requests over HTTPS to the Mullvad DNS server. I’m using Mullvad because they’ve proven to be a pretty solid champion of user privacy. You should follow the instructions on the cloudflared page to install it. I’m running this on Debian, so I just downloaded the .deb package and did a standard dpkg -i cloudflared*.deb.

The instructions tell you to pass the arguments --port 5053 --upstream https://dns.mullvad.net/dns-query to cloudflared, but like, how is it going to resolve dns.mullvad.net if dns.mullvad.net is the only DNS server it’s allowed to speak with? We can’t use the IP address of the Mullvad server either, since the TLS certificate is keyed to the domain name, so that’ll throw an error.

The answer to this problem is /etc/hosts – add the IP address for dns.mullvad.net to your file:

echo "194.242.2.2 dns.mullvad.net" | sudo tee -a /etc/hosts

If you’re running this in an LXC container on Proxmox you’ll also need to do a quick touch /etc/.pve-ignore.resolv.conf so that Proxmox doesn’t overwrite the container’s resolv.conf when you restart it.
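Since it’s actually /etc/hosts we just edited, note that Proxmox honors the same per-file trick for that file too (an /etc/.pve-ignore.<file> marker, per the pct docs), so you likely want both:

# stop Proxmox from regenerating these files inside the container
touch /etc/.pve-ignore.resolv.conf
touch /etc/.pve-ignore.hosts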

Now that we’ve got that squared away we can create a new user for the cloudflared daemon (as it certainly does not need to run as root):

sudo useradd -s /usr/sbin/nologin -r -M cloudflared

This sets the shell to nologin so you can’t, well, log in, marks the account as a system account with -r, and doesn’t create a home directory thanks to -M.

Then we can create a systemd file for this:

[Unit]
Description=cloudflared DNS over HTTPS proxy
After=syslog.target network-online.target

[Service]
Type=simple
User=cloudflared
ExecStart=/usr/local/bin/cloudflared proxy-dns --port 5053 --upstream https://dns.mullvad.net/dns-query
Restart=on-failure
RestartSec=10
KillMode=process

[Install]
WantedBy=multi-user.target

With a quick systemctl enable cloudflared && systemctl start cloudflared we can get this service started. We can confirm this is working via dig:

# dig @127.0.0.1 -p 5053 mullvad.net

; <<>> DiG 9.18.18-0ubuntu0.23.04.1-Ubuntu <<>> @127.0.0.1 -p 5053 mullvad.net
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 57822
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: 06afebf0e4d19267 (echoed)
;; QUESTION SECTION:
;mullvad.net.                   IN      A

;; ANSWER SECTION:
mullvad.net.            60      IN      A       45.83.223.209

;; Query time: 88 msec
;; SERVER: 127.0.0.1#5053(127.0.0.1) (UDP)
;; WHEN: Sat Feb 10 23:46:40 UTC 2024
;; MSG SIZE  rcvd: 79

Now that that’s set up, we can install Pi-Hole by following the instructions on the Pi-Hole site, and during configuration we can tell Pi-Hole that 127.0.0.1#5053 is our upstream DNS provider.
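If you’ve already finished the installer and want to change this later, you can do it from the admin UI under Settings > DNS, or, on Pi-Hole v5 at least (treat the file location as an assumption on newer releases), set it in setupVars.conf:

# /etc/pihole/setupVars.conf – point Pi-Hole at the local cloudflared proxy
PIHOLE_DNS_1=127.0.0.1#5053

followed by a pihole restartdns to reload the resolver.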

Setting up TLS for Pi-Hole

Technically this has nothing to do with the rest of the DNS setup, but setting up a TLS cert via Let’s Encrypt for our Pi-Hole will be nice, since it’ll stop our browser from yelling at us about entering a password over a non-secure connection. With the DNS challenge type we won’t have to expose the server to the internet at large, and we can use the Pi-Hole’s own local DNS service to make it accessible via the custom domain name.

I use Porkbun for my domain names, which is nice since it has an API for making changes, and that API is supported by a wide number of ACME clients, including lego, which we’ll be using here. Download the binary from the releases page and issue yourself a certificate:

sudo mkdir -p /etc/{certs,secrets}
sudo PORKBUN_SECRET_API_KEY=xxx PORKBUN_API_KEY=xxx lego --email [your email] --domains="pihole.yourawesomedomainname.com" --dns porkbun --path /etc/certs --pem run

If you’re not using Porkbun, there’s a massive list of other providers that are supported. We’re using the --pem argument since lighttpd requires a single concatenated cert + key, so rather than generating that manually, we’ll let the tool do it for us.
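lego drops the issued files into a certificates/ subdirectory under the --path you give it, so wiring the combined pem into lighttpd looks roughly like this (a sketch for /etc/lighttpd/external.conf; the exact filename and your Pi-Hole’s lighttpd layout are worth double-checking):

# /etc/lighttpd/external.conf – serve the admin UI over TLS
$SERVER["socket"] == ":443" {
    ssl.engine = "enable"
    # the --pem flag gave us a single combined cert + key file
    ssl.pemfile = "/etc/certs/certificates/pihole.yourawesomedomainname.com.pem"
}

Restart lighttpd afterwards to pick it up.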

We also want to set up automatic renewal, so we’ll make a new system account to run the process, put the secrets on the filesystem, and then lock down access so that only the lego user can read the secrets, and only the lego user and lighttpd’s www-data user can read the certs.

sudo useradd -s /usr/sbin/nologin -r -M lego
echo "PORKBUN_SECRET_API_KEY=xxx" | sudo tee -a /etc/secrets/porkbun > /dev/null
echo "PORKBUN_API_KEY=xxx" | sudo tee -a /etc/secrets/porkbun > /dev/null
echo "EMAIL=xxx" | sudo tee -a /etc/secrets/porkbun > /dev/null
echo "DOMAIN=xxx" | sudo tee -a /etc/secrets/porkbun > /dev/null
sudo chown lego:lego /etc/secrets/porkbun
sudo chown -R lego:www-data /etc/certs
sudo chmod -R 750 /etc/certs
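It’s also worth tightening up the secrets file itself, since it holds live API keys (my extra hardening step, not strictly required for renewal to work):

sudo chmod 600 /etc/secrets/porkbun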

Now we can create a systemd service file in /etc/systemd/system/lego-acme.service:

[Unit]
Description=Renew tls cert

[Service]
Type=oneshot
User=lego
EnvironmentFile=/etc/secrets/porkbun
ExecStart=/usr/local/bin/lego --email $EMAIL --dns porkbun --pem --path /etc/certs --domains=$DOMAIN renew --renew-hook="/usr/local/bin/cert-renew.sh"
PrivateTmp=true
WorkingDirectory=/etc/certs

And a timer in /etc/systemd/system/lego-acme.timer:

[Unit]
Description=Renew certs

[Timer]
Persistent=true
OnCalendar=monthly

[Install]
WantedBy=timers.target

This will check for a renewal once a month. You’ll want to enable and start this timer: systemctl enable lego-acme.timer && systemctl start lego-acme.timer
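You can sanity-check that the timer registered and see when it will next fire:

systemctl list-timers lego-acme.timer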

I also added a renew hook script to restart lighttpd after the cert has been renewed:

#!/bin/sh
systemctl restart lighttpd
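Don’t forget to make the hook executable. (One caveat I’d test: the renewal service runs as the lego user, which probably can’t restart lighttpd on its own, so the hook may need a sudoers entry or similar.)

sudo chmod +x /usr/local/bin/cert-renew.sh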

Routing DNS requests internally

Now, because I didn’t want anything to use any other hard-coded DNS, I set up my router (which runs pfSense) so that all requests to port 53 are redirected to the Pi-Hole (with the Pi-Hole itself exempt), and then set it to block any requests over port 853 (DNS over TLS) or port 443 to any host on a public DNS nameserver list, again exempting the Pi-Hole.

We’ll start off by going to the Firewall > NAT > Port Forward section in pfSense and creating two new rules.

Rule to redirect all DNS requests over port 53 back to the Pi-Hole (which lives at 192.168.1.10 in my local network):

Rule to allow the Pi-Hole to make requests over port 53:

The second rule, allowing the Pi-Hole through, needs to sit above the one redirecting everything else.

Now, when anything makes a request to any DNS service, it’ll be forwarded to your Pi-Hole instead.
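For reference, the redirect rule boils down to something like this (field names approximate; addresses from my network), with the Pi-Hole exemption rule sitting above it:

Interface:            LAN
Protocol:             TCP/UDP
Source:               any
Destination port:     53 (DNS)
Redirect target IP:   192.168.1.10
Redirect target port: 53 (DNS)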

I also wanted to block DNS over TLS and DNS over HTTPS, because you know some shitty company is gonna figure out how to force you to see ads by resolving their servers over a TLS connection, so we’ll just nip that one in the bud by blocking all of it. My network, my rules. The first thing we need to do is get a list of the public services that provide DNS over HTTPS/TLS so we can block requests to them. (And yes, this is a cat-and-mouse game, but isn’t all ad-blocking?)

Go to Firewall -> Aliases -> URLs and create a new alias:

And then go to Firewall -> Rules -> LAN and create another two rules:

In these rules we use an invert match, so they say that anything that isn’t our Pi-Hole is unable to make connections over ports 853 or 443 to the servers on this list. I’ve been running this rule for two and a half years and, aside from my work VPN, have not run into a problem. (I have additional rules that allow my work laptop, which has a static internal IP, to access the correct DNS services for my job. That’s left as an exercise for the reader.)
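A quick way to confirm the whole chain from a client: a query aimed at an outside resolver should still be answered (transparently redirected to the Pi-Hole), while DoT to a public resolver should go nowhere. This needs a dig new enough to speak DoT (BIND 9.18 or so):

# should succeed – silently redirected to the Pi-Hole
dig @8.8.8.8 example.com

# should time out – port 853 to known resolvers is blocked
dig +tls @1.1.1.1 example.com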

This post has been updated to fix the position of the renew command, to change Environment to EnvironmentFile in the renewal service file, and to fix the WantedBy section in the systemd timer.


Cross-compiling Rust

At $dayjob my team writes a bunch of CLI tools in Rust (and a few tools in Go) that we need to run in a wide variety of environments:

  • x86-64 for Linux (GNU libc), MacOS and Windows (ideally with MinGW)
  • aarch64/arm64 for Linux (GNU libc) and MacOS

For a long time this has meant we spin up a whole bunch of CI machines, one for each platform, and compile the tools natively on each. However, this is not the best option available to us. Since we manage our own runner pools (using GitHub Actions), we need to maintain sets of these machines, and for MacOS and Windows we’re relying on the public runner pool infrastructure, which is both more costly and prevents us from accessing internal company systems. (This has started to become a problem, as we need access to Rust crates which are published internally to build some of these tools!) However, Rust and Go are supposed to be pretty good at handling cross-compiling, so let’s give this a shot!

We already “cross-compile” some of our work, as we write HTTP proxy filters in Rust and compile them to WASM based on the proxy-wasm spec. So my naïve first attempt was as simple as installing the Rust toolchain for a given platform triple (why is it called a triple when it sometimes has more than three parts?) and giving it a go. (I’m running this all on an x86_64 Debian machine, so package names will be specific to that platform.)

rustup target add x86_64-pc-windows-gnu
cargo build --target x86_64-pc-windows-gnu

error: failed to run custom build command for `ring v0.17.5`

Caused by:
  process didn't exit successfully: `/root/top-secret-work-project/target/debug/build/ring-9e2d74aa803932bf/build-script-build` (exit status: 1)
  --- stdout
  # output elided...
  running: "x86_64-w64-mingw32-gcc" "-O0" "-ffunction-sections" "-fdata-sections" "-gdwarf-2" "-fno-omit-frame-pointer" "-m64" "-I" "include" "-I" "/root/top-secret-work-project/target/x86_64-pc-windows-gnu/debug/build/ring-8250d53ba97b24ed/out" "-Wall" "-Wextra" "-fvisibility=hidden" "-std=c1x" "-pedantic" "-Wall" "-Wextra" "-Wbad-function-cast" "-Wcast-align" "-Wcast-qual" "-Wconversion" "-Wenum-compare" "-Wfloat-equal" "-Wformat=2" "-Winline" "-Winvalid-pch" "-Wmissing-field-initializers" "-Wmissing-include-dirs" "-Wnested-externs" "-Wredundant-decls" "-Wshadow" "-Wsign-compare" "-Wsign-conversion" "-Wstrict-prototypes" "-Wundef" "-Wuninitialized" "-Wwrite-strings" "-g3" "-DNDEBUG" "-o" "/home/todd/top-secret-work-project/target/x86_64-pc-windows-gnu/debug/build/ring-8250d53ba97b24ed/out/crypto/curve25519/curve25519.o" "-c" "crypto/curve25519/curve25519.c"

  --- stderr


  error occurred: Failed to find tool. Is `x86_64-w64-mingw32-gcc` installed?

Oh no! What’s this “Is x86_64-w64-mingw32-gcc installed?” Wait, why are we calling gcc? I thought this was Rust?!

Well, it is Rust, but it looks like we’re actually compiling some crypto library written in C and then accessing it from Rust. So our Rust toolchain needs to be able to invoke a functional C compiler for our target platform. Well, clang is supposed to be cross-platform out of the box, right? Let’s give that a shot, and tell cargo that we want to use clang as our C compiler.

CC=clang cargo build --target x86_64-pc-windows-gnu

# bunch of log lines omitted

  cargo:warning=In file included from crypto/curve25519/curve25519.c:22:
  cargo:warning=In file included from include/ring-core/mem.h:60:
  cargo:warning=In file included from include/ring-core/base.h:64:
  cargo:warning=In file included from /usr/lib/llvm-15/lib/clang/15.0.7/include/stdint.h:52:
  cargo:warning=/usr/include/stdint.h:26:10: fatal error: 'bits/libc-header-start.h' file not found
  cargo:warning=#include <bits/libc-header-start.h>
  cargo:warning=         ^~~~~~~~~~~~~~~~~~~~~~~~~~
  cargo:warning=1 error generated.

OK! Well clang certainly does invoke, and there are no more complaints about a missing gcc, but we are missing some standard libraries, which is going to be a common theme if you’re not compiling pure-Rust software. We could try to figure out how to install just the support libraries for our target system, but there are a bunch of packages that supply a full cross-platform toolchain. So let’s stop messing around and just install the mingw-w64 package from Debian (which, you’ll note, has gcc-mingw-w64 listed as a dependency).

apt install mingw-w64
cargo build --target x86_64-pc-windows-gnu
#[lots of compiling going on]

todd@cross:~/top-secret-work-project# file target/x86_64-pc-windows-gnu/debug/top-secret-work-project.exe
target/x86_64-pc-windows-gnu/debug/top-secret-work-project.exe: PE32+ executable (console) x86-64, for MS Windows, 21 sections

OMG. That one was really easy – once we got the right toolchain installed. Let’s try something a little closer to home and see if we can build aarch64 for Linux. If we try that old clang trick again, we’ll see we’re missing support libraries for that target as well. Unfortunately these packages aren’t all named consistently, or this would be easier, but if we search the bookworm packages for an aarch64 gcc we’ll find there’s a gcc-aarch64-linux-gnu package. So let’s install that and see what we get!

apt install gcc-aarch64-linux-gnu
cargo build --target aarch64-unknown-linux-gnu
#[again there is a lot of compiling]

          /usr/bin/ld: /root/top-secret-work-project/target/aarch64-unknown-linux-gnu/debug/deps/frontdoor_ops-89e314a14d73e562.105y1p0cy3ffj42o.rcgu.o: error adding symbols: file in wrong format
          collect2: error: ld returned 1 exit status

Well, you should have known, when I ended that previous sentence so optimistically, that it wasn’t going to work right off the bat! It looks like our linker ld is not the proper linker for this platform. Why Cargo can figure out to tell rustc to use the proper C compiler for the arch, but not the proper linker, is beyond me, but we can tell Cargo to tell rustc which linker to use with RUSTFLAGS="-Clinker=[path to linker]". Thankfully, when we installed the aarch64 cross-compile toolchain we also got a proper linker for that platform, aarch64-linux-gnu-ld, so let’s give this a shot.

RUSTFLAGS="-Clinker=aarch64-linux-gnu-ld" cargo build --target aarch64-unknown-linux-gnu
#[compiler nonsense]
  = note: aarch64-linux-gnu-ld: cannot find -lgcc_s: No such file or directory

I have no clue why this happens here, but the problem is that it’s trying to find libgcc_s.so and it’s unable to because it’s not installed in the normal system library search path. Again, why it can figure out the compiler but nothing else is annoying, but we can solve this with another flag passed in via RUSTFLAGS: -L [path to directory]. And, again, when we installed the proper toolchain we actually got these files. On Debian they’re in /usr/lib/gcc-cross/aarch64-linux-gnu/12/, so we’ll try this again!

RUSTFLAGS="-Clinker=aarch64-linux-gnu-ld -L /usr/lib/gcc-cross/aarch64-linux-gnu/12/" cargo build --target aarch64-unknown-linux-gnu
#[again a lot of messages]
todd@cross:~/top-secret-work-project# file target/aarch64-unknown-linux-gnu/debug/top-secret-work-project
target/aarch64-unknown-linux-gnu/debug/top-secret-work-project: ELF 64-bit LSB pie executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, with debug_info, not stripped

Oh my! Would you look at that! We’ve got three of our target platforms so far; the last one must be really easy, right?

[laughs in bsd]

Well, sadly, no. This is where things get really complex. We want to compile for MacOS, but there’s no MacOS toolchain for Debian (or any other Linux, as far as I can tell). Thankfully this is where the community comes in with osxcross, a set of tools for extracting and building a valid toolchain for cross-compiling for MacOS from Linux and BSD! You’ll need an Apple account for this, and you’ll need to ensure you’re using the SDK under the terms of the license Apple provides for it. (I am not a lawyer, but I’m pretty sure that if you’re building software designed to run on their computers with their SDK, that’s kind of the point.)

Follow the instructions on that project to build your toolchain. It will take some time. (It’s OK, I’ll be here when you get back.)

OK?

OK. Done! Great job!

We’ve now got a lot of tools with a lot of really long names. Thankfully we’ve already learned how to point cargo and rustc at alternative compilers and linkers; however, we’re going to need to provide a few more settings to make this work properly. And since we have to override both the C compiler and the linker, we’ll also alter the PATH here so we don’t have to type the full paths out several times.

PATH=[path to osx sdk]/bin:$PATH LD_LIBRARY_PATH=[path to osx sdk]/lib:$LD_LIBRARY_PATH CC=x86_64-apple-darwin22.4-clang RUSTFLAGS="-Clinker=x86_64-apple-darwin22.4-clang -Clink-arg=-undefined -Clink-arg=dynamic_lookup" cargo build --target x86_64-apple-darwin
#[again with the compiling]
todd@cross:~/top-secret-work-project# file target/x86_64-apple-darwin/debug/top-secret-work-project
target/x86_64-apple-darwin/debug/top-secret-work-project: Mach-O 64-bit x86_64 executable, flags:<NOUNDEFS|DYLDLINK|TWOLEVEL|PIE|HAS_TLV_DESCRIPTORS>

You might notice a few things here: we added an LD_LIBRARY_PATH – this is similar to the -L we passed in via RUSTFLAGS before, because, once again, we have an entire set of library files we need to link against for the platform. We also passed in clang as both our compiler and our linker, because it works. I don’t make the rules, but it’s pretty easy this way at least. Finally, we also passed in some specific link-arg flags – these get passed to the linker as additional flags. The cool part about the MacOS stuff is that the SDK we built with osxcross has both aarch64 AND x86_64 binaries in it, so we just change the name of the C compiler and linker here to make an aarch64 version of this binary:

PATH=/root/macos-13.4/bin:$PATH CC=aarch64-apple-darwin22.4-clang LD_LIBRARY_PATH=/root/macos-13.4/lib:$LD_LIBRARY_PATH RUSTFLAGS="-Clinker=aarch64-apple-darwin22.4-clang -Clink-arg=-undefined -Clink-arg=dynamic_lookup" cargo build --target aarch64-apple-darwin
#[come on compile!]
todd@cross:~/top-secret-work-project# file target/aarch64-apple-darwin/debug/top-secret-work-project
target/aarch64-apple-darwin/debug/top-secret-work-project: Mach-O 64-bit arm64 executable, flags:<NOUNDEFS|DYLDLINK|TWOLEVEL|PIE|HAS_TLV_DESCRIPTORS>

And there we go: in our target/ directory we’ve now got aarch64-apple-darwin, aarch64-unknown-linux-gnu, debug, x86_64-apple-darwin, and x86_64-pc-windows-gnu (where debug is our native target).

If you wanted to compile on aarch64 linux, you’d need the gcc-x86-64-linux-gnu package instead, but the rest of the targets and instructions should remain the same.
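One last tip: rather than exporting RUSTFLAGS every time, these per-target settings can live in your project’s .cargo/config.toml (a sketch using the aarch64 Linux paths from above):

# .cargo/config.toml
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-ld"
rustflags = ["-L", "/usr/lib/gcc-cross/aarch64-linux-gnu/12/"]

The Darwin targets can get their own [target.*] sections with the osxcross clang names the same way.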


Cross-compiling Go (and a follow-up about Rust)

Observant readers may have noticed that the URL for the previous post included a go in part of it; I had originally intended to cover this at the end of the previous post. Given how long that post ended up being, here’s the follow-up.

We also have a small amount of Go software to maintain: namely, and the reason this is important, a Fluent-Bit output plugin. This plugin needs to be compiled to a shared C library. So where our Rust issue came from consuming underlying C libraries, here we are emitting a C library, which means we aren’t able to use Go’s cross-platform functionality as it comes out of the box.

This also dovetails nicely with someone pointing out that we could have just used cargo-zigbuild to handle our cross-platform tooling. I chose to figure out the issues by hand, partly to understand what’s required to make all of this work, and partly to understand how you can interact with the underlying compiler invoked by Cargo, but this tool exists and looks to solve many of these issues as well.

That tool relies on the fact that Zig includes a complete C and C++ compiler in its toolset, which, like Clang, is already cross-platform aware. (Interestingly, they have a specific workaround to deal with gcc_s, replacing it with libunwind. Unfortunately, replacing or omitting specific flags passed to the linker Rust invokes seems to be possible only if you dive into the world of build scripts.)

This fact about Zig is pretty cool (and in fact one of the earlier attempts I made for Rust substituted zig cc for clang and ld as well). This is also incredibly useful when trying to get Go to output a shared library for a different platform.

Usually when you cross-compile Go for a different arch or platform you use GOOS and GOARCH to control what it outputs. If you’re not using CGO at all, this is pretty painless. However, if you’re using CGO, you’re gonna have to do some work. Luckily, just like with Rust, we can override the C compiler we want to invoke:

CGO_ENABLED=1 GOARCH=arm64 GOOS=linux CC="zig cc -target aarch64-linux-gnu" go build -buildmode=c-shared -o top-secret-go-project
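Note that GOARCH has to agree with the zig -target, or Go and the C toolchain will disagree about the architecture they’re building for. The x86-64 build should just be the matching pair (a sketch with the same caveats):

CGO_ENABLED=1 GOARCH=amd64 GOOS=linux CC="zig cc -target x86_64-linux-gnu" go build -buildmode=c-shared -o top-secret-go-project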

Thankfully what we’re building here has minimal requirements, so we’re not messing around with linking or anything else.


Custom container templates in Proxmox

After finally purchasing some hardware that is powerful enough to run many things at once (to replace my aging set of raspberries pi and my lenovo thin clients), I figured I’d also switch over from a mix of running-on-bare-metal and everything-else-in-one-very-long-Docker-compose-file and install Proxmox, because apparently I have no self-worth and like to make things complicated.

While I’m waiting for the rest of the hardware to arrive (namely an SSD to stick in the server), I figured I’d mess around with Proxmox and create some LXC containers. After making one or two I noticed there are a few tools I’d like installed by default that aren’t in the pre-built images, and don’t exist in the other template images they provide, namely:

  • avahi for zeroconf DNS
  • tailscaled for easier remote management

So I realized I needed to look into creating a new container template. Unfortunately, as with most things even moderately difficult, search engines are a complete failure when trying to find this; I was able to cobble together what’s going on here from a variety of Proxmox forum and Reddit posts, along with the docs for distrobuilder. So, for posterity’s sake, here’s what you need to do.

  1. Install distrobuilder and its requirements. I already have go installed and configured via asdf, so I omitted that package from the list.
  2. Grab a template from the distrobuilder repo (or make your own, but I’m pretty lazy)
  3. Edit the template to include what you want pre-installed. (I also added lunar as a target and told it to use that as my release.)
  4. Run sudo ~/go/bin/distrobuilder build-lxc ubuntu.yaml (or wherever distrobuilder was installed)
  5. For some reason Proxmox can only use these templates if they’re compressed with zstd. I’m sure there’s some configuration flag to tell distrobuilder to do this, but we can also just do it with the CLIs: xz -dc rootfs.tar.xz | zstd -o ubuntu-home-lab.tar.zst
  6. Copy that file to /var/lib/vz/template/cache/ on your Proxmox machine (or use the GUI and upload it that way).

Now you’ve got your own container template, which, it turns out, is just the rootfs output from distrobuilder.
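And if you’d rather skip the GUI when spinning up a container from the new template, something like this should work (the VM ID, hostname, and storage name are assumptions for your setup):

pct create 200 local:vztmpl/ubuntu-home-lab.tar.zst --hostname homelab-test --storage local-lvm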
