Hi, I just spun up Lemmy 0.17.3 in my Kubernetes cluster, but I'm having trouble getting it to federate with anything.

I can curl the API endpoints for local posts, which all looks good, but every search fails. The stack trace in the backend logs suggests it's failing while trying to resolve the remote object.

My instance is https://campfyre.nickwebster.dev (which is funny because I briefly ran a hand-made social network called Campfyre from ~2014-2016)

Edit: I am now running 0.18.0 and still have the problem with search.

Edit 2: I added a RUN update-ca-certificates step to the Dockerfile for my lemmy_server container, and now I can follow a direct link (e.g. https://campfyre.nickwebster.dev/c/[email protected]), although search still fails.
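
For reference, the change was roughly the following (a sketch, not my exact Dockerfile; it assumes a Debian/Ubuntu-based lemmy_server image):

    # Sketch of the fix described above, assuming a Debian/Ubuntu-based
    # image. If the ca-certificates package isn't already installed, add
    # an apt-get install step for it first.
    FROM [...]
    RUN update-ca-certificates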

  • terribleplan@lemmy.nrd.li · 1 year ago

    I would start by kubectl exec'ing into your pod and using nslookup/dig/whatever you have available to check whether DNS resolution works inside there at all. I don't think there is an easy way to break DNS resolution via config without it being pretty obvious that you're doing so, but you could try adjusting the pod spec's dnsPolicy (I often end up using ClusterFirst) and dnsConfig.
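
    For example (pod name is a placeholder, and which DNS tool you have depends on the image):

        # Get a shell in the lemmy container.
        kubectl exec -it <lemmy-pod> -c lemmy -- sh

        # Inside the container, test resolution with whatever is available:
        nslookup lemmy.ml
        # or
        dig +short lemmy.ml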

    I may be able to help more if you post your pod spec.
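
    If you do end up needing those DNS knobs, they live at the pod spec level, something like this (the values here are just examples, not a recommendation):

        # Sketch of the pod-level DNS settings mentioned above.
        spec:
          dnsPolicy: ClusterFirst
          dnsConfig:
            nameservers:
              - 1.1.1.1
            searches:
              - svc.cluster.local
            options:
              - name: ndots
                value: "2"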

    • sp00ked@lemmy.blahaj.zone (OP) · 1 year ago

      Thanks. I shelled into my container and was able to make DNS requests (and full HTTP connections to the outside world) successfully.

      0: error sending request for url (https://lemmy.ml/.well-known/webfinger?resource=acct:cryptography@lemmy.ml): error trying to connect: error:16000069:STORE routines:ossl_store_get0_loader_int:unregistered scheme:../crypto/store/store_register.c:237:scheme=file, error:80000002:system library:file_open:reason(2):../providers/implementations/storemgmt/file_store.c:267:calling stat(/usr/lib/ssl/certs), error:16000069:STORE routines:ossl_store_get0_loader_int:unregistered scheme
      

      (and so on)
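
      In case it's useful to anyone else hitting this: the stat(/usr/lib/ssl/certs) failure in that trace is OpenSSL not finding a CA store, which you can confirm from inside the container (pod/container names are placeholders):

          # Check whether the CA store directories exist and are populated.
          kubectl exec -it <lemmy-pod> -c lemmy -- ls -l /usr/lib/ssl/certs /etc/ssl/certs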

      My podspec is as follows:

      volumes:
        - name: lemmy-config
          secret:
            secretName: lemmy-config
      imagePullSecrets:
        - name: ocirsecret
      automountServiceAccountToken: false
      containers:
        - name: lemmy
          image: [...]
          imagePullPolicy: Always
          env:
            - name: LEMMY_CONFIG_LOCATION
              value: /etc/lemmy-config/lemmy.hjson
          ports:
            - containerPort: 8536
          resources: {}
          securityContext:
            capabilities:
              drop:
                - CAP_MKNOD
                - CAP_NET_RAW
                - CAP_AUDIT_WRITE
          volumeMounts:
            - mountPath: /etc/lemmy-config
              name: lemmy-config
              readOnly: true
        - name: lemmy-ui
          image: [...]
          imagePullPolicy: Always
          ports:
            - containerPort: 1234
          resources: {}
          securityContext:
            capabilities:
              drop:
                - CAP_MKNOD
                - CAP_NET_RAW
                - CAP_AUDIT_WRITE
      enableServiceLinks: true
      hostname: lemmy
      restartPolicy: Always
      
      • terribleplan@lemmy.nrd.li · 1 year ago

        Yeah, I think your diagnosis in the updated OP was spot on: that particular error comes down to missing CA certificates, and update-ca-certificates is the right fix. As far as I can tell from that podspec, you aren't doing anything particularly odd that I'd expect to break DNS or anything else at the network layer.

        Is the error you're seeing in the logs any different now, or still the same as before the CA certs fix?
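
        If curl happens to be present in the image, you can also replay the exact request from that trace by hand, which makes it easy to tell whether the TLS side is fixed (pod name is a placeholder):

            # Replay the failing webfinger request from inside the container.
            kubectl exec -it <lemmy-pod> -c lemmy -- \
              curl -sv 'https://lemmy.ml/.well-known/webfinger?resource=acct:cryptography@lemmy.ml'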