Pass Cloudron ENV variables to pre-built Docker image
-
I tried to package my first Cloudron app but I am failing.
The application Tandoor Recipes is an already pre-built Docker image. It depends on a Postgres DB, and from its overall architecture I think it is a perfect candidate for Cloudron.
I think all that needs to be done is to pass the Postgres ENV variables Cloudron provides into the container. The Tandoor documentation states that to start the image, one should simply pass these parameters to `docker run`:

```
docker run -d \
  -v ./staticfiles:/opt/recipes/staticfiles \
  -v ./mediafiles:/opt/recipes/mediafiles \
  -p 80:8080 \
  -e SECRET_KEY=YOUR_SECRET_KEY \
  -e DB_ENGINE=django.db.backends.postgresql \
  -e POSTGRES_HOST=db_recipes \
  -e POSTGRES_PORT=5432 \
  -e POSTGRES_USER=djangodb \
  -e POSTGRES_PASSWORD=YOUR_POSTGRES_SECRET_KEY \
  -e POSTGRES_DB=djangodb \
  --name recipes_1 \
  vabene1111/recipes
```
So my idea was to simply skip the `Dockerfile` build step and install the app directly with my `CloudronManifest.json`, maybe like this:

```
{
  "title": "Tandoor Recipes",
  "version": "0.1.0",
  "healthCheckPath": "/",
  "httpPort": 8080,
  "addons": {
    "postgresql": {},
    "localstorage": {}
  },
  "manifestVersion": 2,
  "website": "https://docs.tandoor.dev/"
}
```
If I install this with

```
cloudron install --image vabene1111/recipes --no-wait
```

it installs, but the logs show an error that the DB file is not found (excerpt below). I also tried to build from this image and use the Dockerfile to run a `start.sh` (idea from this post). But I have difficulties understanding how this build-time script is run every time I start the app; maybe this is just my lack of Docker knowledge. So my question is: is it not possible to use external, non-Cloudron-style pre-built images and pass variables into them without building the image myself?
Any help is highly appreciated.
Thanks. Regards,
Christopher

```
Dec 21 17:56:13 box:taskworker Task took 0.76 seconds
Dec 21 17:56:13 box:tasks setCompleted - 3356: {"result":null,"error":null}
Dec 21 17:56:13 box:tasks update 3356: {"percent":100,"result":null,"error":null}
Dec 21 18:10:22 File "/opt/recipes/venv/lib/python3.9/site-packages/django/db/backends/base/base.py", line 200, in connect
Dec 21 18:10:22 self.connection = self.get_new_connection(conn_params)
Dec 21 18:10:22 File "/opt/recipes/venv/lib/python3.9/site-packages/django/utils/asyncio.py", line 33, in inner
Dec 21 18:10:22 return func(*args, **kwargs)
Dec 21 18:10:22 File "/opt/recipes/venv/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py", line 209, in get_new_connection
Dec 21 18:10:22 conn = Database.connect(**conn_params)
Dec 21 18:10:22 django.db.utils.OperationalError: unable to open database file
Dec 21 18:10:22 Done
Dec 21 18:10:22 chmod: /opt/recipes/mediafiles: No such file or directory
Dec 21 18:10:23 [2021-12-21 17:10:23 +0000] [1] [INFO] Starting gunicorn 20.1.0
Dec 21 18:10:23 [2021-12-21 17:10:23 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)
Dec 21 18:10:23 [2021-12-21 17:10:23 +0000] [1] [INFO] Using worker: sync
Dec 21 18:10:23 [2021-12-21 17:10:23 +0000] [12] [INFO] Booting worker with pid: 12
Dec 21 18:10:23 [2021-12-21 18:10:23 +0100] [12] [ERROR] Exception in worker process
Dec 21 18:10:23 Traceback (most recent call last):
Dec 21 18:10:23 File "/opt/recipes/venv/lib/python3.9/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
Dec 21 18:10:23 self.connect()
Dec 21 18:10:23 File "/opt/recipes/venv/lib/python3.9/site-packages/django/utils/asyncio.py", line 33, in inner
Dec 21 18:10:23 return func(*args, **kwargs)
Dec 21 18:10:23 File "/opt/recipes/venv/lib/python3.9/site-packages/django/db/backends/base/base.py", line 200, in connect
Dec 21 18:10:23 self.connection = self.get_new_connection(conn_params)
Dec 21 18:10:23 File "/opt/recipes/venv/lib/python3.9/site-packages/django/utils/asyncio.py", line 33, in inner
Dec 21 18:10:23 return func(*args, **kwargs)
Dec 21 18:10:23 File "/opt/recipes/venv/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py", line 209, in get_new_connection
Dec 21 18:10:23 conn = Database.connect(**conn_params)
Dec 21 18:10:23 sqlite3.OperationalError: unable to open database file
Dec 21 18:10:23
```
-
@cloudron_hacky usually the upstream Docker images are not really suitable as Cloudron packages; however, they may act as a great reference on how to deploy an app. Not sure if you have seen the resources about app packaging at https://docs.cloudron.io/packaging/tutorial/ yet?
Basically all apps on Cloudron start from a common base image, where the latest is now:

```
FROM cloudron/base:3.2.0@sha256:ba1d566164a67c266782545ea9809dc611c4152e27686fd14060332dd88263ea
```
The env variables which are set within the app's container can be found in the addon documentation, for example for Postgres: https://docs.cloudron.io/packaging/addons/#postgresql
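To make that concrete, here is a minimal, untested sketch of a wrapper `start.sh` that maps those addon variables onto the `POSTGRES_*` names from the `docker run` example above. The secret-key handling and the final `exec` line are assumptions for illustration only; the real entrypoint of the upstream image would need to go there.

```
#!/bin/bash
# Sketch only: translate Cloudron's postgresql addon variables into the
# names the upstream Tandoor image reads, then hand over to the app.
set -eu

export DB_ENGINE=django.db.backends.postgresql
export POSTGRES_HOST="${CLOUDRON_POSTGRESQL_HOST}"
export POSTGRES_PORT="${CLOUDRON_POSTGRESQL_PORT}"
export POSTGRES_USER="${CLOUDRON_POSTGRESQL_USERNAME}"
export POSTGRES_PASSWORD="${CLOUDRON_POSTGRESQL_PASSWORD}"
export POSTGRES_DB="${CLOUDRON_POSTGRESQL_DATABASE}"

# /app/data is the writable localstorage volume; generate the Django
# secret once and reuse it across restarts (illustrative approach)
if [[ ! -f /app/data/secret_key ]]; then
    openssl rand -hex 32 > /app/data/secret_key
fi
export SECRET_KEY="$(cat /app/data/secret_key)"

# replace this with whatever the upstream image normally runs (assumed path)
exec /opt/recipes/boot.sh
```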
-
@nebulon, yes, I did go through all the documentation I could find and tried a lot of different things to get the upstream image to work. Doing so, it dawned on me that it might not be as easy as I thought.
Anyway, this image exposes an HTTP port and looking at their docker compose file, it only depends on a Postgres DB. The Postgres configuration is passed as environment variables into the image.
The Cloudron Postgres addon creates and fills environment variables. Is it possible to rename them to reflect the names required by the image? Maybe as part of the CloudronManifest?
But I guess you are right: when it comes to stuff like local storage etc., I might end up having to go the manual installation path anyway. Still, for an image that is completely self-contained and only exposes an HTTP port, why would I want to re-build that image for Cloudron?
Could you maybe elaborate a bit on, e.g., what `cloudron build` does on top of `docker build`, if that makes sense? Anyway, I will have a look into the manual installation path and see where I get.
-
@cloudron_hacky I packaged up Tandoor Recipes myself. Although there are still some open issues, feel free to have a look at my repo: https://git.apehost.de/cloudron-apps/cloudron-tandoor-recipies/
-
@klawitterb Very cool indeed, thanks for the link. Works like a charm.
I myself got the SQL ENV variables passed in and had Tandoor up and running, but I then ran into the problem of static file generation happening at boot time in `start.sh`. I did not manage to get it out of `start.sh` and into the `Dockerfile`, like you did here:

```
RUN venv/bin/python3 manage.py collectstatic_js_reverse
RUN venv/bin/python3 manage.py collectstatic --noinput
```
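For anyone else who was as confused as I was about build time vs. run time: the `RUN` lines above execute once when the image is built, while anything that needs the database or Cloudron's env variables has to live in the script the container runs on every start. A rough sketch of how I picture that split (my own illustration, not the exact contents of the repo linked above):

```
#!/bin/bash
# start.sh sketch -- runs on every app start, unlike the RUN lines above,
# which only execute once at image build time. Paths and the gunicorn
# module name are assumed from the upstream image, not verified.
set -eu

cd /opt/recipes

# media files live on the writable /app/data volume on Cloudron
mkdir -p /app/data/mediafiles

# database migrations need the Postgres addon, so they can only run here
venv/bin/python3 manage.py migrate

# serve the app on the httpPort declared in the manifest
exec venv/bin/gunicorn --bind 0.0.0.0:8080 recipes.wsgi
```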
Anyway, I don't know if it makes sense to keep discussing how to build Tandoor Recipes in this thread, but I have one question. In your setup you still use nginx in the image. I think you followed the manual installation description, where everything including Postgres and nginx is put on the same host.
But we already have a reverse proxy in Cloudron. So I `.dockerignore`d your `nginx.conf` files and exposed port 8080 (gunicorn) directly instead of 8000 (nginx). Do you see a special reason for having another nginx in the image?
-
@cloudron_hacky they stated in their env file that it's recommended not to serve the images through gunicorn, so I kind of did that without thinking too much about it.
-
@klawitterb Oh I see, good point, that part escaped my notice. Mediafiles are directly served by nginx and only the rest is forwarded to Gunicorn.
```
# serve media files
location /media/ {
    alias /app/data/mediafiles/;
}

# pass requests for dynamic content to gunicorn
location / {
    proxy_set_header Host $http_host;
    proxy_pass http://localhost:8080;  # -> this goes to Gunicorn
}
```
So with only the Cloudron nginx, static files would still be served from Gunicorn. So I guess it is worth the effort.
Thanks for that.
-