• 1 Post
  • 7 Comments
Joined 3 days ago
Cake day: December 20th, 2025

  • Also, now that I’ve re-read this (I didn’t understand what the downvotes meant at first): why does a new project that doesn’t compete with big companies deserve downvotes? I’m just trying to meet tech people and talk about it, that’s all. It doesn’t need money, it doesn’t hurt anyone, and I’m not posting bullshit.

    If it doesn’t solve a problem for you yet, that’s fine; it will get better over time. I genuinely want to understand what made you comment like this. And since you’re a moderator (respect, btw), why push people toward hating on it? What’s the goal here? Should I delete the repo?


  • Nobody here asked for technical details, so I didn’t respond with technical stuff. But now that you ask, I can respond:

    1. The rebuild occurs periodically; you set the period (in seconds) in the .env. A container named orchestrator stops and rebuilds the vault containers, deleting every file that is not in the database and therefore not encrypted (like dropped payloads). I haven’t implemented specific event-based triggers yet, but I plan to.

    2. Session tokens are stored encrypted in the database, so when a vault container is rebuilt, sessions remain intact thanks to Postgres.

    3. Same as 2: auth tokens are stored in the database and are never lost, even when the whole stack is rebuilt.

    4. Yes, but not everything. Since one container (the orchestrator) needs access to the host’s Docker socket, I don’t mount the socket directly. Instead, I use a separate container with an allowlist to prevent the orchestrator from shutting down services like Postgres. That container is authenticated with a token, which I rotate; it’s derived from a secret_key stored in the .env and regenerated each time using argon2id with random parameters. I also use Docker networks to isolate containers that don’t need to talk to each other, like the vault containers and the “docker socket guardian” container.

    5. Every item has its own blob: one blob per file. For folders, I use a hierarchical tree in the database: each file has a parent id pointing to its folder, and files at the root have no parent id.

    6. Can the app tune storage requirements depending on the S3 configuration? Not yet. S3 integration is a new feature, but I’ve added your idea to my personal roadmap, thanks.
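    The sweep described in point 1 boils down to a set difference between what’s on disk and what the database tracks. Here’s a minimal sketch of that idea (the function and paths are mine, not taken from the actual code):

```python
def files_to_remove(disk_paths, db_paths):
    """Return the on-disk paths that have no matching database record.

    Anything the database doesn't track was never encrypted/registered
    (e.g. a dropped payload), so the rebuild sweep deletes it.
    """
    return set(disk_paths) - set(db_paths)

# Toy example: the database knows about two vault blobs, but a third
# file has appeared on disk -- the sweep flags it for deletion.
db = {"/vault/blobs/a1", "/vault/blobs/b2"}
disk = {"/vault/blobs/a1", "/vault/blobs/b2", "/vault/blobs/payload.sh"}
print(files_to_remove(disk, db))  # {'/vault/blobs/payload.sh'}
```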

    And I understand perfectly why you’re asking. No hate at all; I like feedback like this because it helps me improve.
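    For point 5, the folder hierarchy is a classic adjacency list: walk the parent ids upward to rebuild a full path. A rough sketch of the idea (names and row shapes are my own, not the project’s schema):

```python
def build_paths(rows):
    """Resolve full paths from (id, name, parent_id) rows, where a
    parent_id of None means the item sits at the vault root."""
    by_id = {rid: (name, parent) for rid, name, parent in rows}
    paths = {}
    for rid in by_id:
        parts = []
        cur = rid
        while cur is not None:
            name, parent = by_id[cur]
            parts.append(name)
            cur = parent
        paths[rid] = "/" + "/".join(reversed(parts))
    return paths

rows = [
    (1, "docs", None),       # folder at the root
    (2, "taxes", 1),         # subfolder of docs
    (3, "2024.pdf", 2),      # file inside docs/taxes
    (4, "notes.txt", None),  # file at the root
]
print(build_paths(rows)[3])  # /docs/taxes/2024.pdf
```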


  • Hey, thanks for the honest feedback, I really appreciate you taking the time to share your thoughts.

    Yeah, v1 was pretty rough, I won’t lie. It didn’t even work on a clean install. I was just starting to mess with GitHub back then, so my early work lacked proper tests, workflows, and a good release plan. That’s totally on me.

    I rushed v2 out because I didn’t want to keep building on shaky ground. Since then, I’ve really focused on making things stable: adding pre-commit checks, setting up CI workflows, and testing installs on fresh VMs so I know it actually works for other people, not just on my PC.

    You’re also right about the words I’m using. “Zero trust” fits way better than “zero knowledge” (I literally translated it from the French 😅), and I need to be much clearer and more precise about that in the docs.

    Regarding issues, I’m still hoping more people will check it out and give feedback. But honestly, I’m always happy to chat and answer questions when they come up, that’s exactly what I’m hoping to get more of.


  • Hey there! That’s a great question.

    So, when you’re just using something by yourself on your own computer, E2EE doesn’t always make a huge difference. You really start to see its value when you bring in outside storage, like S3, or when you have a bunch of people using it.

    Think about a company running its own app. If someone uploads sensitive files and doesn’t want the system administrator or the tech team to read them, E2EE comes to the rescue. The files get scrambled before they even leave the user’s device. So, even if the server is in-house, the admin only sees encrypted stuff.

    It’s basically about separating who operates the infrastructure from who can actually read the data, which lets people use shared or external storage and still know their stuff is private.
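    To make the “admin only sees scrambled stuff” part concrete, here’s a toy sketch of client-side encryption using a one-time pad (XOR). This is an illustration of the data flow only, not real-world crypto; a real app would use a vetted AEAD cipher, and all names here are mine:

```python
import secrets

def client_encrypt(plaintext: bytes):
    """Toy client-side encryption: XOR with a random one-time key.
    Illustration only -- NOT how a production app should encrypt.
    The key stays on the user's device; only the ciphertext leaves."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def client_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

secret_report = b"Q3 salaries spreadsheet contents"
key, blob = client_encrypt(secret_report)

# The server (and its admin) stores and sees only `blob`.
assert blob != secret_report
# Only the key holder can recover the original file.
assert client_decrypt(key, blob) == secret_report
```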


  • Here’s a simple way to look at it: it’s all about persistence. If someone sneaks a backdoor onto a server or inside a container, that backdoor usually needs the environment to stay put.

    But with containers that are always changing, that persistence gets cut off. We log the bad stuff, the old container gets shut down, and a brand new one pops up. Your service keeps running smoothly for folks, but whatever the attacker put there vanishes with the old container.

    It’s not about saying hacks won’t ever happen, but about making it way tougher for them to stick around for long :)
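    A tiny conceptual model of that persistence argument (purely illustrative names; real orchestration would go through the Docker API, not a dict):

```python
# The image baseline is immutable; every fresh container starts from it.
IMAGE_BASELINE = {"/app/server.py": "legit code"}

def start_container():
    """Model a rebuild: a new container is a fresh copy of the image."""
    return dict(IMAGE_BASELINE)

container = start_container()

# An attacker drops a backdoor into the *running* container's filesystem.
container["/tmp/backdoor.sh"] = "evil payload"
assert "/tmp/backdoor.sh" in container

# Periodic rebuild: the old container is destroyed and replaced.
container = start_container()

# The backdoor didn't survive -- it lived only in the discarded container.
assert "/tmp/backdoor.sh" not in container
```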


  • Nah, not really. I mostly use AI for the annoying stuff like GitHub workflows, install scripts, and boilerplate code, not the actual backend or frontend code.

    Oh, and since I’m French, I also use it to clean up my notes into good English for the README (in response to Jokulhlaups). It’s just a handy tool to speed things up, not some magic button that builds everything with one command. If you look at the commit history, you can see the project grew over time. Definitely didn’t just pop out of a single prompt, haha.


  • Good questions!

    I just stick with Python, Go and Vue because I know them pretty well. I’d rather use a few tools I’m good at than try to use a bunch of different ones. It just makes my code tidier and easier for me to keep up with.

    About the problem: the idea is to let you host your own stuff even if your computer isn’t super powerful. You can use storage from other companies, but you don’t have to trust them with your private data. Your files get encrypted on your computer before they even leave it. So, the storage company only ever sees scrambled data, never your actual stuff.

    When I say not losing control, I’m talking about you being the only one who can read your data, not where the files are actually stored.

    Oh, and yep, I do use AI tools, mainly Copilot. It helps me work faster on things like GitHub Actions, the install.sh script, etc. I don’t really see a reason to hide that.

    Thanks a lot for taking a look, even if it’s not totally up your alley!