Not the containers and data, but the images themselves. The point would be reproducibility in case a remote registry no longer carries a certain image. Do you do that, and how?
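Something like this is what I have in mind, just with plain docker save / docker load (the image name here is only a placeholder):

```sh
# Pull the image once while the registry still has it,
# then write it to a tarball as a local backup:
docker pull nginx:1.27
docker save -o nginx_1.27.tar nginx:1.27

# Restore it later without needing any registry:
docker load -i nginx_1.27.tar
```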

  • realitaetsverlust@piefed.zip · 21 hours ago
    I’m kinda confused by all of the people here doing that tbh.

    The entire point of Dockerfiles is to have them produce the same image over and over again. Meaning I can take the Dockerfile, spin it up on any machine on god's green earth, and have it run there in the exact same state as anywhere else, minus any configs or files that need to be mounted.

    Now, if I’m worried about an image disappearing from a remote registry, I just download the Dockerfile and store it locally somewhere. But backing up the entire image seems seriously weird to me and kinda goes against the spirit of Docker.
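    What I'd keep around is just the Dockerfile and its build context, and rebuild when needed, roughly like this (names are only placeholders):

    ```sh
    # Rebuild the image locally from the stored Dockerfile,
    # instead of archiving the built image itself:
    docker build -t myapp:local -f ./Dockerfile .
    ```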

    • crater2150@feddit.org · 12 hours ago

      A lot of Dockerfiles start with installing dependencies via the base image’s package manager, without specifying exact versions (which isn’t always possible, as most distros don’t keep the full history of all packages in their repos). So all your dependencies may end up with different versions when you build again.
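      For example, a Dockerfile along these lines isn't reproducible, and the pinned variant only builds as long as the mirrors still carry exactly that version (package and version string are just illustrations):

      ```Dockerfile
      FROM debian:bookworm

      # Unpinned: each rebuild pulls whatever version the repo serves that day
      RUN apt-get update && apt-get install -y curl

      # Pinned: only works while the mirror still ships exactly this version
      RUN apt-get update && apt-get install -y curl=7.88.1-10+deb12u5
      ```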

      • realitaetsverlust@piefed.zip · 9 hours ago

        True, but I’ve got two problems with that line of reasoning:

        1. I don’t want any outdated dependencies within my network. There might be a critical bug in them, and if I back up the images, I keep those bugs with me. That seems pretty silly.
        2. If an application breaks because you updated dependencies, you either have to upgrade the application as well, or you’ve got some abandonware on your hands, in which case it’s probably time to find a replacement.