So I was reading this article about Signal creator Moxie Marlinspike’s new project, Confer, which claims to be a verifiably E2E-encrypted LLM chat service. There are a couple of short blog articles that give the gist of it, and some GitHub repos, including this one, which has scripts for producing the VM that will run your particular LLM session. But if I’m following this correctly, it implies that every chat session (or perhaps every logged-in user) would have its own VM running its own LLM to keep the chain of trust complete. That seems impossible from a scalability perspective, since even small LLMs require huge amounts of RAM and compute. Did I miss something fundamental here?
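To put rough numbers on the scaling worry (these are my own back-of-envelope assumptions, not anything from Confer’s docs): even a smallish 7B-parameter model at fp16 needs on the order of 14 GB just to hold its weights, before KV cache or activations, so a dedicated model instance per session or per user adds up fast.

```python
# Rough memory estimate for a per-session model instance.
# Assumptions (mine, not Confer's): 7B parameters, fp16 weights.
params = 7e9
bytes_per_param = 2          # fp16 = 2 bytes per parameter
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB of RAM/VRAM just for weights per instance")
# KV cache and activations come on top of that for every active session.
```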

  • artifex@piefed.social (OP) · 10 hours ago

    Ok, I interpreted it to mean that the VMs were being created as needed and keyed to your key specifically (which would be the most secure scenario, I think), and I couldn’t figure out how that could possibly work economically. But it makes more sense if a single, separately encrypted host is decrypting your request and encrypting your reply along with everyone else’s (rough sketch below).
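
    Here’s how I picture that shared-host interpretation. Everything below is a hypothetical illustration — the key handling, function names, and use of PyNaCl sealed boxes are my own stand-ins, not Confer’s actual protocol: one attested machine holds a single enclave keypair, every client seals its prompt to that public key, and the reply gets sealed back to each user’s own key.

    ```python
    # Hypothetical sketch of a shared attested host: one enclave keypair,
    # many users' requests decrypted and re-encrypted inside it.
    # PyNaCl sealed boxes are used purely for illustration.
    from nacl.public import PrivateKey, PublicKey, SealedBox

    # Generated inside the enclave; the public half would be published
    # alongside the attestation evidence.
    enclave_key = PrivateKey.generate()
    enclave_pub = enclave_key.public_key

    def client_encrypt(prompt: bytes, enclave_pub: PublicKey) -> bytes:
        # Any client seals its request to the attested enclave key.
        return SealedBox(enclave_pub).encrypt(prompt)

    def enclave_handle(ciphertext: bytes, user_pub: PublicKey) -> bytes:
        # Inside the enclave: decrypt, run the shared model, then seal
        # the reply back to that individual user's key.
        prompt = SealedBox(enclave_key).decrypt(ciphertext)
        reply = run_shared_llm(prompt)  # one model instance serving everyone
        return SealedBox(user_pub).encrypt(reply)

    def run_shared_llm(prompt: bytes) -> bytes:
        return b"(model output for) " + prompt  # stand-in for the real model
    ```

    The point being: there’s one VM image and one model to attest, and the per-user part is just key wrapping, which is cheap compared to spinning up a model per session.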