
Deploying External-DNS for Pi-hole on Kubernetes

Why

External-DNS is a Kubernetes addon that automatically creates DNS records for your services and ingresses in a DNS provider. If you're reading this, you probably already know what it is; if not, check it out.

It fits great into an Infrastructure as Code and GitOps workflow, where you can define your DNS records (or use existing resources) in your Kubernetes manifests and let External-DNS do the rest.

This guide aims to be one step up from the official docs, with more explanation and a bit more reasoning behind each step.

What

What it’s not

How

Prerequisites

Deploy External-DNS

According to the docs, there are two ways of deploying External-DNS: with the official Helm chart or with plain YAML manifests.

I usually prefer Helm charts, for reasons I should write about some day, but the Helm chart doesn't quite support the pihole provider yet (see the appendix). So we'll use the YAML manifests.

First, create a namespace for the External-DNS deployment:

kubectl create namespace external-dns

The namespace name can be anything; if you change it, look for the namespace fields in the manifests and commands below and adjust them accordingly.

Configure Pi-hole Authentication (Optional)

If your Pi-hole instance doesn't require authentication, or you prefer not to use Kubernetes secrets (you shouldn't use them directly anyway; consider something like the 1Password operator), you can skip ahead to deploying the manifests.

However, if your Pi-hole's admin dashboard is password-protected, you'll need to get the password into the cluster as a secret somehow (again, the 1Password operator is one way).
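As a sketch of that route: if you happen to run the 1Password operator, a OnePasswordItem resource along these lines would sync the password into a Kubernetes secret for you. The vault and item names are placeholders, and as far as I know the operator derives the secret keys from the item's field labels, so the item would need a field labeled EXTERNAL_DNS_PIHOLE_PASSWORD:

# Hypothetical sketch: requires the 1Password operator to be installed.
apiVersion: onepassword.com/v1
kind: OnePasswordItem
metadata:
  name: pihole-password        # the synced secret gets this name
  namespace: external-dns
spec:
  # Placeholder path: vault and item names depend on your setup.
  # The item needs a field labeled EXTERNAL_DNS_PIHOLE_PASSWORD so the
  # resulting secret key matches what the Deployment below expects.
  itemPath: "vaults/homelab/items/pihole"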

It is possible to pass the password with the --pihole-password flag, but please don't. You'll want to check your manifests into a Git repository, and you don't want to check in your password.

Create the secret by running (use your actual password instead of passwordhere):

kubectl --namespace external-dns create secret generic pihole-password \
  --from-literal=EXTERNAL_DNS_PIHOLE_PASSWORD=passwordhere

The secret name can be changed; if you do, update the secretRef field in the Deployment below accordingly. The key, EXTERNAL_DNS_PIHOLE_PASSWORD, should stay as-is, since envFrom turns it directly into the environment variable that External-DNS reads.
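If you want to double-check what actually landed in the secret, you can pull the key back out and decode it:

kubectl --namespace external-dns get secret pihole-password \
  -o jsonpath='{.data.EXTERNAL_DNS_PIHOLE_PASSWORD}' | base64 -d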

Create the manifests

The following YAML is taken more or less directly from the docs; the only thing I've added is the namespace field, so everything lands in the namespace we just created.

Let's start with the fun part: the permission stuff.

I will propose some filenames, but you can name them whatever you want. Just make sure you know what’s in them.

A ServiceAccount: This will be the account External-DNS will use to interact with the Kubernetes API.

external-dns-serviceaccount.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: external-dns

A ClusterRole: This defines the permissions you want the External-DNS ServiceAccount to have:

external-dns-clusterrole.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list","watch"]

A ClusterRoleBinding: This binds the ClusterRole to the ServiceAccount. Be sure to change the namespace field if you changed the namespace.

external-dns-clusterrolebinding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: external-dns

Finally, the actual fun part: the Deployment. This is where you define the External-DNS container and its configuration.

external-dns-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.14.2
        # If authentication is disabled and/or you didn't create
        # a secret, you can remove this block.
        envFrom:
        - secretRef:
            # Change this if you gave the secret a different name
            name: pihole-password
        args:
        - --source=service
        - --source=ingress
        # Pi-hole only supports A/AAAA/CNAME records, so there is no mechanism to track ownership.
        # You don't need to set this flag, but if you leave it unset, you will receive warning
        # logs when ExternalDNS attempts to create TXT records.
        - --registry=noop
        # IMPORTANT: If you have records that you manage manually in Pi-hole, set
        # the policy to upsert-only so they do not get deleted.
        - --policy=upsert-only
        - --provider=pihole
        # Change this to the actual address of your Pi-hole web server
        - --pihole-server=http://pihole-web.pihole.svc.cluster.local
      securityContext:
        fsGroup: 65534 # For ExternalDNS to be able to read Kubernetes token files

Now take a minute and look through the spec, especially the args field. My way may not be your way.
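One flag worth knowing about is --domain-filter, which restricts External-DNS to names under a given domain. If that fits your setup, you could add it to the args (example.com is a placeholder):

# Only manage records for names under this domain (placeholder).
- --domain-filter=example.com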

Apply the manifests

Now that you have four different manifests, let's apply them. This can be done in a boring way or in a fun way. I'll show you the fun way (and then the boring way).

Kustomize is a tool that does so much more than what I’m going to show you, but for now, we’ll use it to apply the manifests:

Create a kustomization.yaml file:

resources:
- external-dns-serviceaccount.yaml
- external-dns-clusterrole.yaml
- external-dns-clusterrolebinding.yaml
- external-dns-deployment.yaml
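As a side note, Kustomize can also stamp the namespace onto every resource it emits, which makes the namespace fields in the individual manifests redundant. One extra line in kustomization.yaml does it:

namespace: external-dns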

If you want to take a look at the final manifests before applying them, you can run:

kustomize build .

And apply it:

kubectl apply -k .

Notice that we don’t specify a file, but rather a directory. This is because Kustomize will look for a kustomization.yaml file in the directory and apply it.

If you don’t want to use Kustomize, you can apply the manifests one by one:

kubectl apply -f external-dns-serviceaccount.yaml
kubectl apply -f external-dns-clusterrole.yaml
kubectl apply -f external-dns-clusterrolebinding.yaml
kubectl apply -f external-dns-deployment.yaml

Or all at once:

kubectl apply -f external-dns-serviceaccount.yaml -f external-dns-clusterrole.yaml -f external-dns-clusterrolebinding.yaml -f external-dns-deployment.yaml

You could also just smash them all together in a single file, but since you are of course checking your manifests into a Git repository and want to keep their history clean, I would advise against it.

Verify

There would be no point in deploying External-DNS if you didn’t verify that it works.
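Before digging into logs, you can sanity-check the RBAC setup by asking the API server whether the ServiceAccount is allowed to do what the ClusterRole promises:

kubectl auth can-i list services \
  --as=system:serviceaccount:external-dns:external-dns

This should answer yes; if it answers no, revisit the ClusterRoleBinding (the namespace under subjects is a common culprit).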

Start by finding the External-DNS pod's name:

kubectl get pods --namespace external-dns

Check its logs:

kubectl logs --namespace external-dns pods/<pod-name>
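If you don't feel like copy-pasting pod names, kubectl can resolve the pod for you when you point it at the Deployment instead:

kubectl logs --namespace external-dns deployment/external-dns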

You should see something like:

time="2024-09-19T16:44:10Z" level=info msg="config: {APIServerURL: KubeConfig: ............"
time="2024-09-19T16:44:10Z" level=info msg="Instantiating new Kubernetes client"
time="2024-09-19T16:44:10Z" level=info msg="Using inCluster-config based on serviceaccount-token"
time="2024-09-19T16:44:10Z" level=info msg="Created Kubernetes client https://10.43.0.1:443"
time="2024-09-19T16:44:10Z" level=info msg="All records are already up to date"

If you see All records are already up to date, that’s great! It means External-DNS is working and it doesn’t need to create any records.

Now, let's create a test Ingress to see if External-DNS creates a record for it.

Create a test Ingress (taken from the docs):

test.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo
spec:
  ingressClassName: nginx
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foo
            port:
              number: 80

Apply it:

kubectl apply -f test.yaml

Remember that since we are not specifying a namespace, it will be created in the namespace kubectl is currently using (often default).
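If you are unsure which namespace that is, this one-liner prints the current context's namespace (empty output means default):

kubectl config view --minify --output 'jsonpath={..namespace}'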

Check the logs of the External-DNS pod again:

kubectl logs --namespace external-dns pods/<pod-name>

After a few seconds, you should see something like:

time="2024-09-19T16:50:12Z" level=info msg="add foo.bar.com IN A -> 172.20.10.151"

And if you check your Pi-hole dashboard, you should see a new record for foo.bar.com.
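You can also ask Pi-hole directly instead of (or in addition to) checking the dashboard. Replace the address with your Pi-hole instance's actual IP:

dig +short foo.bar.com @192.168.1.10

If the record was created, this prints the address External-DNS registered.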

Remember to clean up after yourself (or not, if you want to keep the records):

kubectl delete -f test.yaml

Conclusion

And that’s it! You now have External-DNS running in your cluster, creating DNS records for your services and ingresses in your Pi-hole instance. My suggested next steps would be to:

Appendix

How Helm almost works for the pihole provider

External-DNS does have a Helm chart, but it does not officially support the pihole provider yet. However, I still tried.

Let's take a look at the manifest we already created, but only at the parts of it that are Pi-hole specific.

external-dns-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  template:
    spec:
      containers:
      - name: external-dns
        envFrom:
        - secretRef:
            # Change this if you gave the secret a different name
            name: pihole-password
        args:
        - --source=service
        - --source=ingress
        - --registry=noop
        - --policy=upsert-only
        - --provider=pihole
        - --pihole-server=http://pihole-web.pihole.svc.cluster.local

First, the args. All of these except --pihole-server are templated in the Helm chart, so you can just set them in the values.yaml file:

sources:
  - service
  - ingress
registry: noop
policy: upsert-only
provider: pihole

And --pihole-server is solved by the extraArgs field in the Helm chart:

extraArgs:
  - --pihole-server=http://pihole-web.pihole.svc.cluster.local

We are really close to something here; only the envFrom field is left, and this is where the Helm chart falls short: envFrom is not templated. We could of course hack our way around it, but that would be a hack.
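For completeness, the workaround I have in mind: the chart does template a plain env list (at least in the chart versions I've looked at), so you could inject the variable with a secretKeyRef from values.yaml. It works, but it reimplements envFrom by hand:

env:
  # Maps the secret key to the env var External-DNS reads; assumes the
  # chart passes this list through to the container unchanged.
  - name: EXTERNAL_DNS_PIHOLE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: pihole-password
        key: EXTERNAL_DNS_PIHOLE_PASSWORD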

So, until the envFrom field is templated in the Helm chart, we'll have to use the YAML manifests, or a Helm + Kustomize workaround. Maybe I'll submit a pull request to the Helm chart someday.
