Bitwarden - Self-hosted password manager



  • Another question - how are updates done? Or will your app eventually become an "official" cloudron app?



  • @necrevistonnezr said in Bitwarden - Self-hosted password manager:

    even with 1 GB of memory assigned, I get a "ran out of memory" quite often
    currently the app is quite unresponsive as it's searching for favicons (as I can see from the logs)
    Importing from Enpass 6 (json) worked fine for me

    This probably depends quite a bit on the amount of data you throw at it. I actually started with an empty vault, since password history etc. would not have carried over anyway. But in my case it's easy to keep a portable version of KeePass around if I need a password from the old database.

    Maybe the icon caching is causing some trouble for you? You could deactivate it from the settings. It would probably also be a good idea to get in touch with the upstream project to see whether they are already aware of this and whether there is anything that can lighten the load a bit.
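    If the settings toggle alone does not help, icon fetching can reportedly also be switched off entirely via the environment. A minimal sketch, assuming the variable names from the bitwarden_rs wiki (verify them against the current docs):

```shell
# Hedged sketch: bitwarden_rs is configured through environment variables.
# DISABLE_ICON_DOWNLOAD stops the server from fetching favicons at all;
# clients then fall back to a placeholder icon.
DISABLE_ICON_DOWNLOAD=true
# Alternatively, keep downloads but cap how long a single fetch may take
# (value in seconds; name assumed from the bitwarden_rs wiki).
ICON_DOWNLOAD_TIMEOUT=10
```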

    @necrevistonnezr said in Bitwarden - Self-hosted password manager:

    how are updates done? Or will your app eventually become an "official" cloudron app?

    My idea is to have a good base so that @nebulon can make this an official app.



  • LDAP syncing is now available for bitwarden_rs. Check it out on the wiki.

    Do you have your app shared on git.cloudron.io yet? If so I can help contribute.





  • I've got a branch where I've almost gotten LDAP syncing fully working, but invites don't seem to be working properly.

    Even if I try to send an invite directly through the web interface, it just hangs. The user shows up in the list; however, the logs never show a success or failure response for the request. I've checked the SMTP settings and they appear to be correct. I'll keep debugging, though.

    https://git.cloudron.io/iamthefij/bitwardenrs-app/tree/ldap-sync



  • Got it working! Turns out I needed to enable SMTP_EXPLICIT_TLS.

    Now I just have to schedule the sync task and do some cleanup. Should have a fully ready app soon.
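    For reference, a hedged sketch of the mail environment this implies (variable names as documented for bitwarden_rs; the relay host below is made up). Note that SMTP_EXPLICIT_TLS reportedly selects implicit TLS-on-connect rather than STARTTLS, despite its name:

```shell
# Hypothetical SMTP configuration for bitwarden_rs on Cloudron.
SMTP_HOST=mail.example.com    # made-up relay host for the example
SMTP_PORT=465                 # a common implicit-TLS port
SMTP_SSL=true
SMTP_EXPLICIT_TLS=true        # TLS on connect, not STARTTLS
SMTP_FROM=bitwarden@example.com
```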



  • Just set up the scheduler, but I'm getting weird results.

    When the sync is run through the scheduler I get:

    Apr 23 00:10:08 thread 'main' panicked at 'Could not authenticate with http://127.0.0.1:3000. Error { kind: Hyper(Error { kind: Connect, cause: Os { code: 111, kind: ConnectionRefused, message: "Connection refused" } }), url: Some("http://127.0.0.1:3000/admin/") }', src/bw_admin.rs:62:17
    

    However, it runs just fine when I drop into a terminal and select the task from the dropdown and run it.

    @girish I figured it would be using docker exec, and when I ssh to my server I can run it successfully using

     sudo docker exec 92ad3d37-2014-44f4-870f-25d862f57b4a sh -c '/app/code/ldap_sync.sh'
    

    However, I just dug through the source for the scheduler addon and found that it's creating a new container.

    How should we access the original container via HTTP? Is there a reason this is a new container and not simply an exec?



  • @iamthefij Hm, I just noticed that I see the same behaviour in Bitwarden. Sending a mail does not give an error in the UI and the log states "12:09:50 - [2019-04-23 10:09:50][lettre::smtp::client][DEBUG] connecting to 172.18.0.13:2465", but no mail actually arrives (on the same Cloudron system).

    Adding SMTP_EXPLICIT_TLS, on the other hand, left me with an "Error sending email. handshake error" message when sending mails.

    By the way, I am not quite sure about setting a token for the admin interface. This unfortunately seems to be required by the sync job, but in the Cloudron case it complicates things a bit, since as far as I know there is no easy way to place this token somewhere an admin without shell access can easily read it.



  • @fbartels I'm not certain it's actually required, but it is recommended.

    I added it because I saw some strange behavior with the LDAP access controls. Without the Admin Token, I was somehow able to access the admin page with no auth check via a Private Browsing window. I can do some more testing and see if I can do away with the token.

    For sending the email, I had set both SSL and TLS to true. Though I just realized I may have been using Mailgun, as I did switch to that to rule out the Cloudron SMTP server. I'll do another test. Edit: Just verified, and it works using the Cloudron SMTP server with both of those settings set to true.





  • As proper database support is mentioned in https://github.com/dani-garcia/bitwarden_rs/issues/246, I think we should hold off on a release until either MySQL or Postgres is supported. Otherwise we will end up with migration issues from SQLite to one of the others.



  • @nebulon Why do you say that? Is there something wrong with SQLite, or are you just worried about when support is eventually added?

    Any insights on the issues accessing the API from a scheduled task? It would be good to get this resolved anyway.



  • Hey @nebulon, @girish . Any ideas how I can proceed to make this work for Cloudron?

    I'm happy to patch box to use exec rather than run, or to provide an option, but it's unclear why the decision was made to use run in the first place.



  • FYI: When you increment the build number in the Dockerfile and in CloudronManifest.json, bitwarden_rs 1.9.0 builds & installs without problems...

    Maybe @fbartels can update the repo even while we're waiting for other databases to be supported?



  • @necrevistonnezr said in Bitwarden - Self-hosted password manager:

    Maybe @fbartels can update the repo even while we're waiting for other databases to be supported?

    Sure, done.



  • @girish any ideas on this?



  • @iamthefij Sorry for the delay, got caught up with Cloudron 4. Now I have the time to investigate this a bit. From what I understand, the issue is that the scheduler container is unable to access the main container via HTTP? The scheduler container is supposed to be spawned in the same networking namespace and one is supposed to be able to directly access http://localhost:port. If that doesn't work, it's a bug. Let me test this and get back shortly.



  • @iamthefij So, there is no way right now to reach the parent app container from the cron container. We used to use exec before, but we removed it because there was actually no way to clean/delete exec containers (not sure if they have fixed this now). Those exec containers will basically hang around, so for a scheduler this means a new container keeps getting created and just accumulates garbage. IIRC, there was also a case where these scheduler containers were doing processing with files using /tmp and /run as scratchpads and then mistakenly deleted files of the parent container. This led me to change it to just spawn a completely new container. Finally, this also helps us in multi-host setups where the scheduler container can run anywhere (exec requires the same pod).

    I will try to make a fix tomorrow where the scheduler containers can somehow get to the app container (I guess injecting the hostname of the app container as an env var will suffice).
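    If such a variable lands, the sync script could prefer it and fall back to localhost. A hypothetical sketch (APP_CONTAINER_HOST is an invented name for illustration, pending whatever the scheduler addon actually injects):

```shell
# Hypothetical: prefer an injected app-container hostname, falling back
# to localhost so the same script still works inside the app container.
# APP_CONTAINER_HOST is an invented variable name for this sketch.
BW_HOST="${APP_CONTAINER_HOST:-127.0.0.1}"
BW_URL="http://${BW_HOST}:3000"
echo "syncing against ${BW_URL}"
```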

    Also, any reason why the "syncing" is not part of the main bitwarden_rs binary itself? That way the scheduler could just call bitwarden_rs ldap-sync instead of doing an HTTP call?



  • @girish

    We used to use exec before but we removed it because there was actually no way to clean/delete exec containers (not sure if they have fixed this now). Those exec containers will basically hang around, so for a scheduler this means a new container keeps getting created and just accumulates garbage.

    I'm not sure I follow. Using docker run actually creates a new container every time; unless the --rm option is added, that container persists after running. With --rm, it is removed automatically, which is what Cloudron appears to do today.

    In contrast, docker exec doesn't create any new container. It runs a process within an existing container. There is no need to clean up any containers after execution.
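    To make the distinction concrete, an illustrative transcript (requires a Docker daemon; the image and container names are just for the example):

```shell
# `docker run --rm` creates a brand-new container from the image and
# removes it again after the command exits:
docker run --rm alpine echo "from a fresh container"

# `docker exec` runs a process inside an already-running container;
# no extra container is created, so there is nothing to clean up:
docker run -d --name demo alpine sleep 300
docker exec demo echo "from inside the existing container"
docker rm -f demo
```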

    If the issue is that poorly written cron jobs are deleting files that should not be deleted, that sounds like a bug in the app, not in box. There are legitimate reasons to want access to the same filesystem: cleaning up logs, periodically sending out files, or, as in this case, accessing a SQLite database.

    Also, any reason why the "syncing" is not part of the main bitwarden_rs binary itself?

    That was a design decision by the original Bitwarden creator. Bitwarden_rs decided to follow the same convention.

    That way the scheduler can just call bitwarden_rs ldap-sync instead of doing a http call?

    Unfortunately, that would not get around this issue. Executing bitwarden_rs ldap-sync from a new container (created by docker run) would not have access to the same filesystem, and therefore it would write to a new SQLite database that would immediately be cleaned up.



  • @iamthefij said in Bitwarden - Self-hosted password manager:

    In contrast, docker exec doesn't create any new container. It runs a process within an existing container. There is no need to clean up any containers after execution.

    If you look at https://docs.docker.com/engine/api/v1.37/#operation/ContainerExec, it creates an "exec container" and returns the object id. This id is then used to start it at https://docs.docker.com/engine/api/v1.37/#operation/ExecStart. You will notice there is no API to delete this object; it is only deleted when the main container is removed. Initially, I thought this would not be an issue, but in practice, after a cron job has run more than 500 times (which is just 2-3 days), docker starts crawling and causes all sorts of strange problems. There is a GitHub issue somewhere for this and, IIRC, the Docker maintainers said that one should not exec too often.
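    The accumulation is visible on a live container; an illustrative check (requires a Docker daemon; the container name is made up):

```shell
# Each call to `docker exec` creates an exec instance that stays
# attached to the container until the container itself is removed.
docker run -d --name demo alpine sleep 300
docker exec demo true
docker exec demo true
# Depending on the Docker version, the accumulated exec instances
# show up in the container's inspect output:
docker inspect --format '{{json .ExecIDs}}' demo
docker rm -f demo
```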

    @iamthefij said in Bitwarden - Self-hosted password manager:

    Unfortunately, that would not get around this issue. Executing bitwarden_rs ldap-sync from a new container (created by docker run) would not have access to the same filesystem, and therefore it would write to a new SQLite database that would immediately be cleaned up.

    The scheduler-run containers do have access to the filesystem/local storage via volume mounting. Otherwise, WordPress cron jobs could not access WordPress plugins, etc.

    Also, regardless of the above, I am working on a patch to make HTTP access possible.

