Not containers and data, but the images. The point would be reproducibility in case a remote registry no longer contains a certain image. Do you do that, and how?

  • kumi@feddit.online
    12 hours ago

    Just a small number of base images (ubuntu:, alpine:, debian:) are routinely synced, and anything else is built in CI from Containerfiles. Those are backed up. So as long as the backups are intact, I can recover from loss of the image store even without internet access.
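    A sync like that could be sketched with skopeo, which copies images between registries without needing a local daemon. The internal registry name and the tag list here are hypothetical placeholders, and the script defaults to a dry run that only prints the commands:

    ```shell
    #!/bin/sh
    # Hedged sketch: mirror a few public base images into an internal registry.
    # "registry.internal/mirror" and the tags are hypothetical placeholders.
    # SKOPEO defaults to a dry run; set SKOPEO=skopeo to actually copy.
    set -eu

    SKOPEO="${SKOPEO:-echo skopeo}"
    INTERNAL="registry.internal/mirror"

    for image in ubuntu:24.04 alpine:3.20 debian:bookworm; do
        # print (or run) the registry-to-registry copy for each base image
        $SKOPEO copy "docker://docker.io/library/${image}" \
                     "docker://${INTERNAL}/${image}"
    done
    ```

    Run from a cron job or CI schedule, this keeps the local mirror fresh without pulling through a daemon.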

    I also have two-tier container image storage anyway, which gives redundancy for the built images, but that's more a side-effect of workarounds… Anyway, the "source of truth" docker-registry that gets pushed to is only exposed internally: to whoever needs to do an authenticated push, and to the second layer of pull-through caches, which the internal servers actually pull from. So backups aside, images in active use already exist in at least three copies (push registry, pull registry, and whoever's running them). The mirrored public images are a separate chain altogether.
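    A pull-through cache like the second tier can be sketched with the Distribution registry's proxy mode; the upstream hostname here is a hypothetical stand-in for the internal push registry:

    ```yaml
    # Hedged sketch of a Distribution (docker-registry) pull-through cache.
    # "push-registry.internal" is a hypothetical hostname for the
    # authenticated source-of-truth registry described above.
    version: 0.1
    storage:
      filesystem:
        rootdirectory: /var/lib/registry
    http:
      addr: :5000
    proxy:
      # proxy mode turns this instance into a read-only pull-through cache
      remoteurl: https://push-registry.internal
    ```

    In proxy mode the instance is read-only, so internal servers can pull from it while pushes stay confined to the first tier.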

    This has been running for a while, so it's all hand-wired from component services. A dedicated Forgejo deployment looks like it could cover a large part of the above in one package today. Plus it conveniently syncs external git dependencies.