Bitwarden - Self-hosted password manager
-
@iamthefij So, there is no way right now to reach out to the parent app container from the cron container. We used to use exec before, but we removed it because there was actually no way to clean/delete exec containers (not sure if they have fixed this now). Those exec containers basically hang around, so for a scheduler this means a new container keeps getting created and just accumulates garbage. IIRC, there was also a case where these scheduler containers were doing processing with files using /tmp and /run as scratchpads and then mistakenly deleted files of the parent container. This led me to change it to just spawn a completely new container. Finally, this also helps us in multi-host setups where the scheduler container can run anywhere (exec requires the same pod).
I will try to make a fix tomorrow where the scheduler containers can somehow get to the app container (I guess injecting the hostname of the app container as an env var will suffice).
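Roughly something like this on the scheduler side (the env var name, image name and script path are just placeholders, not the final interface):

```sh
# Sketch: the scheduler spawns the job container with the app container's
# hostname injected, so the job can reach the app over HTTP instead of
# needing to share its filesystem.
docker run --rm \
  -e APP_HOSTNAME="$APP_CONTAINER_HOSTNAME" \
  my-app-image /app/code/ldap_sync.sh
```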
Also, any reason why the "syncing" is not part of the main bitwarden_rs binary itself? That way the scheduler can just call `bitwarden_rs ldap-sync` instead of making an HTTP call?
-
We used to use exec before but we removed it because there was actually no way to clean/delete exec containers (not sure if they have fixed this now). Those exec containers will basically hang around, so for a scheduler this means a new container keeps getting created and just accumulates garbage.
I'm not sure I follow. Using `docker run` actually creates a new container by default. That is, unless the `--rm` option is added; if added, it will remove the container after running. This is actually what Cloudron appears to do today.

In contrast, `docker exec` doesn't create any new container. It runs a process within an existing container. There is no need to clean up any containers after execution.
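To illustrate the difference (image, container and script names here are just placeholders):

```sh
# Spawns a fresh container from the image; with --rm the container is
# deleted when the command exits, but it starts with its own filesystem
# unless volumes are mounted.
docker run --rm my-app-image /app/code/sync.sh

# Runs the command inside the already-running container, sharing its
# filesystem; no separate container to clean up afterwards.
docker exec myapp /app/code/sync.sh
```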
If the issue is that poorly written cron jobs are deleting files that should not be deleted, that sounds like a bug with the app, not with `box`. There are legitimate reasons to want to access the same filesystem: maybe it's cleaning up logs, periodically sending out files, or, as in this case, accessing a SQLite database.

Also, any reason why the "syncing" is not part of the main bitwarden_rs binary itself?
That was a design decision by the original Bitwarden creator. Bitwarden_rs decided to follow the same convention.
That way the scheduler can just call bitwarden_rs ldap-sync instead of doing a http call?
Unfortunately, that would not get around this issue. Executing `bitwarden_rs ldap-sync` from a new container (created by `docker run`) would not have access to the same filesystem, and therefore it would write to a new SQLite database that would immediately be cleaned up.
-
@iamthefij said in Bitwarden - Self-hosted password manager:
In contrast, docker exec doesn't create any new container. It runs a process within an existing container. There is no need to clean up any containers after execution.
If you look at https://docs.docker.com/engine/api/v1.37/#operation/ContainerExec, it creates an "exec container" and returns the object id. This id is then used to start it at https://docs.docker.com/engine/api/v1.37/#operation/ExecStart. You will notice there is no API to delete this object; it is only deleted when the main container is removed. Initially, I thought this would not be an issue, but in practice, after a cron job runs more than 500 times (which is just 2-3 days), docker slows to a crawl and causes all sorts of strange problems. There is a GitHub issue somewhere for this and, IIRC, the docker maintainers said that one should not exec too often.
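For illustration, this is roughly what the flow looks like against the API linked above (container name and returned id are placeholders):

```sh
# Create an exec instance inside a running container; this returns an Id
curl -s --unix-socket /var/run/docker.sock \
  -H 'Content-Type: application/json' \
  -d '{"Cmd": ["true"], "AttachStdout": true}' \
  http://localhost/v1.37/containers/myapp/exec
# => {"Id": "abc123"}

# Start the exec instance by its id
curl -s --unix-socket /var/run/docker.sock \
  -H 'Content-Type: application/json' \
  -d '{"Detach": true, "Tty": false}' \
  http://localhost/v1.37/exec/abc123/start

# Note: there is no DELETE /exec/{id} endpoint; the exec object only goes
# away when the parent container itself is removed.
```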
@iamthefij said in Bitwarden - Self-hosted password manager:
Unfortunately, that would not get around this issue. Executing bitwarden_rs ldap-sync from a new container (created by docker run) would not have access to the same filesystem, and therefore it would write to a new SQLite database that would immediately be cleaned up.
The scheduler `run` containers do have access to the filesystem/local storage by volume mounting (see the sketch below); otherwise, wp cron jobs could not access wp plugins etc.

Also, regardless of the above, I am working on a patch to make HTTP access possible.
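To illustrate the volume mounting (volume, image and mount path names are made up for illustration):

```sh
# The scheduler's run container mounts the app's data volume, so the cron
# job sees the same files (wp plugins, the SQLite db, etc.) as the app.
docker run --rm \
  -v appdata:/app/data \
  my-app-image /app/code/cron_task.sh
```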
-
@girish wow! I had no clue that `exec` worked that way. TIL! Is there no garbage collection process? Seems strange. My host probably has a bunch of dangling execs. They seem like they'd be benign, but I wonder.

HTTP access would be a great way to solve this. Happy to help test or debug.
-
Bitwarden_rs 1.9.1 is out
https://github.com/dani-garcia/bitwarden_rs/releases/tag/1.9.1

- Fixed broken U2F in Chrome 74+
- Added images to email
- Updated dependencies
-
@girish said in Bitwarden - Self-hosted password manager:
We pushed the app store release today for community apps. I will make a post tomorrow about how to get the community apps published so others can install easily.
Please include how to migrate from an existing (testing) installation, if possible. Thanks!
-
@fbartels was there a reason for moving the bitwarden image from the COPY statement to a FROM statement at the beginning? I'm picking up LDAP support again now that the hostname is available, and I'm getting the binary from a published image as well.
Was it just to avoid pulling when modifying any config values?
-
@girish Hmm... it doesn't seem to be working correctly.
I'm getting:
Jun 26 17:55:25 thread 'main' panicked at 'Could not authenticate with http://8e50545e-6293-459d-8aa8-5abdb13695dc-ldap_sync:3000. Error { kind: Hyper(Error { kind: Connect, cause: Os { code: 111, kind: ConnectionRefused, message: "Connection refused" } }), url: Some("http://8e50545e-6293-459d-8aa8-5abdb13695dc-ldap_sync:3000/admin/") }', src/bw_admin.rs:62:17
It appears that the hostname is the hostname of the `ldap_sync` container that the cron job spawned? Is that correct? When I open a terminal for the app, it just gives the first part without `ldap_sync`, which seems right.
-
@iamthefij do you mean https://git.cloudron.io/fbartels/bitwardenrs-app/blob/master/Dockerfile#L1 ?
That was so that just a single line needs to be changed when bitwarden is updated.
-
@girish roger. That worked! Now I'm getting some new error from within Bitwarden_rs, which is good and means that it's actually hitting the server!
@fbartels got it. That could also be facilitated using an `ARG`.

It doesn't matter too much. The difference is really just in caching, but it doesn't look like the Cloudron build servers do caching.
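For example, something along these lines (a sketch only; the image names, paths and version are illustrative, not the actual Dockerfile):

```dockerfile
# Pin the upstream version once via a build arg
ARG BWRS_VERSION=1.9.1
FROM bitwardenrs/server:${BWRS_VERSION} AS bitwarden

FROM cloudron/base:1.0.0
# Copy the prebuilt binary and web vault out of the upstream image
# (paths are illustrative)
COPY --from=bitwarden /bitwarden_rs /app/code/bitwarden_rs
COPY --from=bitwarden /web-vault /app/code/web-vault
CMD ["/app/code/bitwarden_rs"]
```

That way bumping the version is still a one-line change to the default, or it can be overridden at build time with `--build-arg BWRS_VERSION=...` without touching the file.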
-
@girish is there anything that might prevent this container from querying LDAP?
I'm getting the following error:
Jun 27 18:30:57 thread 'main' panicked at 'rc=1 (operationsError), dn: "ou=users, dc=cloudron", text: "No such app"', src/main.rs:21:9
To verify that it's something to do with the cron container, I generated the config file and cat'ed it to the log in both the main application and the scheduled container. I diffed the two configs and they are identical.
However, when I run the sync script from the terminal attached to the main container, it works correctly. From the scheduled container, I get this error.
Any ideas? I'm actually unable to find out where the "No such app" comes from. It is a pretty generic term, so searching online isn't much help. I did check the `bitwarden_ldap_sync` codebase, the Rust `ldap3` codebase, and the `box` codebase, but no luck.