We used exec previously but removed it because there was no way to clean up/delete exec instances (not sure if this has been fixed since). Those exec instances just hang around, so for a scheduler this means new ones keep getting created and garbage accumulates.
I'm not sure I follow. Using `docker run` creates a new container by default, unless the `--rm` option is added, in which case the container is removed after it exits. This appears to be what Cloudron does today.
`docker exec` doesn't create any new container. It runs a process within an existing container. There is no need to clean up any containers after execution.
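To illustrate the difference being discussed (a sketch; `myimage`, `myapp`, and the script path are hypothetical names, not anything from Cloudron or bitwarden_rs):

```shell
# docker run creates a brand-new container from an image each time;
# --rm deletes that container once the command exits.
docker run --rm myimage /app/sync.sh

# docker exec runs a process inside the already-running "myapp" container,
# sharing its filesystem and namespaces; no new container is created,
# so there is nothing to clean up afterwards.
docker exec myapp /app/sync.sh
```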
If the issue is that poorly written cron jobs delete files they shouldn't, that sounds like a bug in the app, not in box. There are legitimate reasons to want access to the same filesystem: cleaning up logs, periodically sending out files, or, as in this case, accessing a SQLite database.
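For example, the kinds of legitimate shared-filesystem access mentioned above could look like this via `docker exec` (a sketch only; the container name `myapp`, the paths, and the presence of `gzip`/`sqlite3` inside the container are all assumptions):

```shell
# Rotate the app's logs in place from a scheduled job.
docker exec myapp sh -c 'gzip -f /run/app/*.log'

# Query the live SQLite database the app is writing to.
docker exec myapp sqlite3 /app/data/db.sqlite3 'SELECT COUNT(*) FROM users;'
```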
Also, any reason why the "syncing" is not part of the main bitwarden_rs binary itself?
That was a design decision by the original Bitwarden creator. Bitwarden_rs decided to follow the same convention.
That way the scheduler could just call `bitwarden_rs ldap-sync` instead of making an HTTP call?
Unfortunately, that would not get around this issue. Executing `bitwarden_rs ldap-sync` from a new container (created by `docker run`) would not have access to the same filesystem, so it would write to a new SQLite database that would immediately be cleaned up.
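To make the isolation concrete (a sketch under assumptions: the image name `bitwardenrs/server`, the container name `bitwarden`, and the `/data` path are illustrative, not Cloudron's actual layout):

```shell
# A container freshly created by docker run gets its own copy-on-write
# filesystem layer: the running app's live /data is NOT visible here
# unless its volume is explicitly shared (e.g. with -v or --volumes-from).
docker run --rm bitwardenrs/server ls /data

# Running inside the existing app container sees the live database.
docker exec bitwarden ls /data
```

This is why the sync has to run inside (or share a volume with) the app container rather than in a throwaway `docker run` container.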