Bitwarden - Self-hosted password manager
-
@nebulon Why do you say that? Is there something wrong with SQLite? Or are you just worried about when support is eventually added?
Any insights on the issues accessing the API from a scheduled task? It would be good to get this resolved anyway.
-
FYI: When you increment the build number in the Dockerfile and in CloudronManifest.json, bitwarden_rs 1.9.0 builds & installs without problems...
Maybe @fbartels can update the repo even while we're waiting for other databases to be supported?
-
@necrevistonnezr said in Bitwarden - Self-hosted password manager:
Maybe @fbartels can update the repo even while we're waiting for other databases being supported?
sure. done.
-
@iamthefij Sorry for the delay, I got caught up with Cloudron 4. Now I have the time to investigate this a bit. From what I understand, the issue is that the scheduler container is unable to access the main container via HTTP? The scheduler container is supposed to be spawned in the same networking namespace, and one should be able to directly access http://localhost:port. If that doesn't work, it's a bug. Let me test this and get back shortly.
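In other words, from inside the scheduler container something like this should just work (the port here is illustrative):

```sh
# Inside the scheduler container: if it really shares the app container's
# network namespace, the app should answer on localhost (port illustrative).
curl http://localhost:3000/
```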
-
@iamthefij So, there is no way right now to reach the parent app container from the cron container. We used to use exec before, but we removed it because there was actually no way to clean up/delete exec containers (not sure if they have fixed this by now). Those exec containers basically hang around, so for a scheduler this means a new container keeps getting created and garbage just accumulates. IIRC, there was also a case where these scheduler containers were doing processing with files, using /tmp and /run as scratchpads, and they mistakenly deleted files of the parent container. This led me to change it to just spawn a completely new container. Finally, this also helps us in multi-host setups, where the scheduler container can run anywhere (exec requires the same pod).
I will try to make a fix tomorrow where the scheduler containers can somehow get to the app container (I guess injecting the hostname of the app container as an env var will suffice).
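Something along these lines, as a rough sketch (the env var and image names here are made up, not a final interface):

```sh
# Rough sketch, not a final interface: spawn the cron task in a fresh
# container, but inject the app container's hostname so the task can reach
# the app over HTTP (env var and image names are made up).
docker run --rm \
    -e APP_CONTAINER_HOSTNAME="<app-container-hostname>" \
    scheduler-image \
    sh -c 'curl "http://${APP_CONTAINER_HOSTNAME}:3000/"'
```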
Also, any reason why the "syncing" is not part of the main bitwarden_rs binary itself? That way the scheduler can just call `bitwarden_rs ldap-sync` instead of doing an HTTP call?
-
@girish said in Bitwarden - Self-hosted password manager:
We used to use exec before, but we removed it because there was actually no way to clean up/delete exec containers (not sure if they have fixed this by now). Those exec containers basically hang around, so for a scheduler this means a new container keeps getting created and garbage just accumulates.
I'm not sure I follow. Using `docker run` actually creates a new container by default. That is, unless the `--rm` option is added; if added, it will remove the container after running. This is actually what Cloudron appears to do today.

In contrast, `docker exec` doesn't create any new container. It runs a process within an existing container, so there is no need to clean up any containers after execution.
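To illustrate the contrast (container and image names here are made up):

```sh
# `docker run --rm` creates a brand-new container for every invocation and
# removes it when the command exits, so nothing accumulates.
docker run --rm myapp-image /app/code/cron_task.sh

# `docker exec` runs the command inside the already-running app container:
# no new container is created, and the job sees the app's own filesystem.
docker exec myapp /app/code/cron_task.sh
```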
If the issue is that poorly written cron jobs are deleting files that should not be deleted, that sounds like a bug with the app, not with `box`. There are legitimate reasons to want to access the same filesystem: maybe it's cleaning up logs or something, periodically sending out files, or, as in this case, accessing a SQLite database.

@girish said in Bitwarden - Self-hosted password manager:
Also, any reason why the "syncing" is not part of the main bitwarden_rs binary itself?
That was a design decision by the original Bitwarden creator. Bitwarden_rs decided to follow the same convention.
@girish said in Bitwarden - Self-hosted password manager:
That way the scheduler can just call bitwarden_rs ldap-sync instead of doing a http call?
Unfortunately, that would not get around this issue. Executing `bitwarden_rs ldap-sync` from a new container (created by `docker run`) would not have access to the same filesystem, and therefore it would write to a new SQLite database that would immediately be cleaned up.
-
@iamthefij said in Bitwarden - Self-hosted password manager:
In contrast, docker exec doesn't create any new container. It runs a process within an existing container. There is no need to clean up any containers after execution.
If you see https://docs.docker.com/engine/api/v1.37/#operation/ContainerExec, it creates an "exec container" and returns the object id. This id is then used to start it at https://docs.docker.com/engine/api/v1.37/#operation/ExecStart. You will notice there is no API to delete this object; it is only deleted when the main container is removed. Initially, I thought this would not be an issue, but in practice, after a cron job runs more than 500 times (which is just 2-3 days), Docker starts crawling and causes all sorts of strange problems. There is a GitHub issue somewhere for this and, IIRC, the Docker maintainers said that one should not exec too often.
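For concreteness, the lifecycle against the raw Engine API looks roughly like this (container name made up, ids shortened):

```sh
# 1. Create an exec instance: the daemon allocates an object and returns
#    its id (container name is made up).
curl -s --unix-socket /var/run/docker.sock \
    -H 'Content-Type: application/json' \
    -d '{"Cmd": ["/bin/true"]}' \
    http://localhost/v1.37/containers/myapp/exec
# => {"Id": "bf0a0c27..."}

# 2. Start it. Note there is no DELETE endpoint for /exec/{id}; the object
#    lives inside the daemon until the parent container itself is removed.
curl -s --unix-socket /var/run/docker.sock \
    -H 'Content-Type: application/json' \
    -d '{"Detach": true}' \
    http://localhost/v1.37/exec/bf0a0c27.../start
```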
@iamthefij said in Bitwarden - Self-hosted password manager:
Unfortunately, that would not get around this issue. Executing bitwarden_rs ldap-sync from a new container (created by docker run) would not have access to the same filesystem, and therefore it would write to a new SQLite database that would immediately be cleaned up.
The scheduler `run` containers do have access to the filesystem/local storage by volume mounting (see the sketch at the end of this post); otherwise, WordPress cron jobs could not access WordPress plugins etc.

Also, regardless of the above, I am working on a patch to make HTTP access possible.
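For instance, roughly (image name and paths are made up for illustration):

```sh
# The scheduler `run` container mounts the same data volume as the app
# container, so cron tasks see the same files, including a SQLite database.
docker run --rm \
    -v /srv/appdata/myapp:/app/data \
    myapp-image /app/code/cron_task.sh
```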
-
@girish wow! I had no clue that `exec` worked that way. TIL! Is there no garbage collection process? Seems strange. My host probably has a bunch of dangling execs. They seem like they'd be benign, but I wonder.

HTTP access would be a great way to solve this. Happy to help test or debug.
-
Bitwarden_rs 1.9.1 is out
https://github.com/dani-garcia/bitwarden_rs/releases/tag/1.9.1

- Fixed broken U2F in Chrome 74+
- Added images to email
- Updated dependencies
-
@girish said in Bitwarden - Self-hosted password manager:
We pushed the app store release today for community apps. I will make a post tomorrow about how to get the community apps published so others can install easily.
Please include how to migrate from an existing (testing) installation, if possible. Thanks!
-
@fbartels was there a reason for moving the bitwarden image from the COPY statement to a FROM statement at the beginning? I'm picking up LDAP support again now that the hostname is available, and I'm getting the binary from a published image as well.
Was it just to avoid pulling when modifying any config values?
-
@girish Hmm... it doesn't seem to be working correctly.
I'm getting:
Jun 26 17:55:25 thread 'main' panicked at 'Could not authenticate with http://8e50545e-6293-459d-8aa8-5abdb13695dc-ldap_sync:3000. Error { kind: Hyper(Error { kind: Connect, cause: Os { code: 111, kind: ConnectionRefused, message: "Connection refused" } }), url: Some("http://8e50545e-6293-459d-8aa8-5abdb13695dc-ldap_sync:3000/admin/") }', src/bw_admin.rs:62:17
It appears that the hostname is the hostname of the `ldap_sync` container that the cron job spawned? Is that correct? When I open a terminal for the app, it just gives the first part without `ldap_sync`, which seems right.
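For illustration, assuming the injected name really is the app container's hostname plus a task suffix (variable names here are made up):

```sh
# Hostname injected into the cron container (taken from the log above):
INJECTED_HOST="8e50545e-6293-459d-8aa8-5abdb13695dc-ldap_sync"

# Stripping the task suffix yields what the app's own terminal reports:
APP_HOST="${INJECTED_HOST%-ldap_sync}"
echo "$APP_HOST"   # -> 8e50545e-6293-459d-8aa8-5abdb13695dc
```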