@adison good point, sorry for that
msbt
-
@adison just an fyi: you can edit your posts
-
My Immich is caught in a restart loop while doing this:
Sep 01 08:31:36  2023-09-01 06:31:36,514 INFO success: machine-learning entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Sep 01 08:31:39  Traceback (most recent call last):
Sep 01 08:31:39  File "<frozen runpy>", line 198, in _run_module_as_main
Sep 01 08:31:39  File "<frozen runpy>", line 88, in _run_code
Sep 01 08:31:39  File "/app/code/machine-learning/app/main.py", line 13, in <module>
Sep 01 08:31:39  from app.models.base import InferenceModel
Sep 01 08:31:39  File "/app/code/machine-learning/app/models/__init__.py", line 1, in <module>
Sep 01 08:31:39  from .clip import CLIPEncoder
Sep 01 08:31:39  File "/app/code/machine-learning/app/models/clip.py", line 8, in <module>
Sep 01 08:31:39  from clip_server.model.clip import BICUBIC, _convert_image_to_rgb
Sep 01 08:31:39  ModuleNotFoundError: No module named 'clip_server'
Sep 01 08:31:40  172.18.0.1 - - [01/Sep/2023:06:31:40 +0000] "GET / HTTP/1.1" 302 5 "-" "Mozilla (CloudronHealth)"
Sep 01 08:31:40  2023-09-01 06:31:40,088 INFO exited: machine-learning (exit status 1; not expected)
Sep 01 08:31:41  2023-09-01 06:31:41,090 INFO spawned: 'machine-learning' with pid 77776
Sep 01 08:31:42  2023-09-01 06:31:42,092 INFO success: machine-learning entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Sep 01 08:31:42  I20230901 06:31:42.149549 224 batched_indexer.cpp:279] Running GC for aborted requests, req map size: 0
Sep 01 08:31:43  I20230901 06:31:43.207621 223 raft_server.cpp:545] Term: 59, last_index index: 4264, committed_index: 4264, known_applied_index: 4264, applying_index: 0, queued_writes: 0, pending_queue_size: 0, local_sequence: 16159
Sep 01 08:31:43  I20230901 06:31:43.207686 265 raft_server.h:60] Peer refresh succeeded!
Sep 01 08:31:45  Traceback (most recent call last):
Sep 01 08:31:45  File "<frozen runpy>", line 198, in _run_module_as_main
Sep 01 08:31:45  File "<frozen runpy>", line 88, in _run_code
Sep 01 08:31:45  File "/app/code/machine-learning/app/main.py", line 13, in <module>
Sep 01 08:31:45  from app.models.base import InferenceModel
Sep 01 08:31:45  File "/app/code/machine-learning/app/models/__init__.py", line 1, in <module>
Sep 01 08:31:45  from .clip import CLIPEncoder
Sep 01 08:31:45  File "/app/code/machine-learning/app/models/clip.py", line 8, in <module>
Sep 01 08:31:45  from clip_server.model.clip import BICUBIC, _convert_image_to_rgb
Sep 01 08:31:45  ModuleNotFoundError: No module named 'clip_server'
Sep 01 08:31:45  2023-09-01 06:31:45,357 INFO exited: machine-learning (exit status 1; not expected)
Sep 01 08:31:46  2023-09-01 06:31:46,360 INFO spawned: 'machine-learning' with pid 77780
Sep 01 08:31:47  2023-09-01 06:31:47,361 INFO success: machine-learning entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Sep 01 08:31:50  172.18.0.1 - - [01/Sep/2023:06:31:50 +0000] "GET / HTTP/1.1" 302 5 "-" "Mozilla (CloudronHealth)"
Sep 01 08:31:50  Traceback (most recent call last):
I've restored each update to see when it started, and it seems it was introduced with v1.37.0: v1.36.0 works fine, and after that the restarts begin.
I also installed a fresh instance to check whether it happens there as well, and it does, so something needs fixing.
Edit: The corresponding update is this one: https://github.com/immich-app/immich/releases/tag/v1.75.0
-
@girish fyi, those Cloudron issues can't be viewed by regular users anymore. Did you change something in the visibility/permissions?
-
@timconsidine of course there are ways to do that outside of Cloudron, but since this affects any and all installations, it would be nice if it were shipped with the platform by default, which would also survive migrations and such.
It would be cool to set a threshold for any disk and mounted volume in /#/system, either a fixed number or a percentage, at which a notification email is sent out. -
@nebulon true, that is probably the reason in many cases, but I have a current one where the platformdata logs are >15GB now due to some errors in an app. That didn't grow overnight, but still rapidly. Plenty of time to act on it, though, if I have a notification about it.
-
@girish wasn't there a threshold for when that message appears? There should still be plenty of space to send an email at that point (also, my servers all use Postmark, maybe that would make it easier to send those messages too?)
I would rather get too many emails than have to recover a Cloudron because it ran out of space.
But any kind of notification/webhook/API endpoint to monitor disk space would be much appreciated, that's one of my biggest concerns these days.
-
So I just had another machine hit 100% disk usage without getting notified via email, only the Cloudron notification inside the dashboard (and it was pure luck that I opened it up, I could have caught it much later):
What's the current status there? Was there supposed to be an email, or did that get removed with the other notifications (e.g. backup failed)? If so, what's the best way to get notified outside of Cloudron when a server is running low on space?
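Until something like that ships with the platform, one stop-gap is a small cron script on the host itself. A minimal sketch in Python (the threshold, SMTP host, and addresses are made up for illustration, not Cloudron defaults; adjust them to your setup, e.g. point `smtp_host` at Postmark's SMTP endpoint):

```python
import shutil
import smtplib
import socket
from email.message import EmailMessage

# Hypothetical threshold, not a Cloudron setting.
THRESHOLD_PERCENT = 85

def disk_usage_percent(path="/"):
    """Return the used-space percentage of the filesystem containing path."""
    total, used, _free = shutil.disk_usage(path)
    return round(used / total * 100, 1)

def check_and_alert(paths, smtp_host="localhost", to_addr="admin@example.com"):
    """Email an alert for every path whose filesystem is above the threshold."""
    for path in paths:
        pct = disk_usage_percent(path)
        if pct >= THRESHOLD_PERCENT:
            msg = EmailMessage()
            msg["Subject"] = f"Disk alert on {socket.gethostname()}: {path} at {pct}%"
            msg["From"] = f"monitor@{socket.gethostname()}"
            msg["To"] = to_addr
            msg.set_content(f"{path} is {pct}% full (threshold: {THRESHOLD_PERCENT}%).")
            with smtplib.SMTP(smtp_host) as server:
                server.send_message(msg)
```

You could then call `check_and_alert(["/", "/mnt/volume1"])` from a root crontab every 15 minutes or so. Not a replacement for a proper platform feature, since it has to be redone after every migration, which is exactly the point of the request.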
-
Thanks for following up, but I gave up the ebook-project a while ago
-
@girish I for one have 4 machines on v7.5.0, and only one of those was updated manually (a free one); the other three updated on their own via cron. I was a bit surprised back then (although always happy to help with debugging) and upvoted this post because of that:
-
@girish said in Best way/protocol to mount Hetzner Storage Box?:
SSHFS is not maintained anymore
What do you mean it's not maintained anymore? I can't find any information regarding that.
-
Since it also happens with a fresh installation, this release should be revoked. I'm guessing it's a packaging issue, but my Ruby knowledge is nonexistent, so I'm not 100% sure.
-
Same here on 2 different servers, it seems to be a restart loop of sidekiq.
-
why not add this as custom CSS instead? That should do the trick:
.topic-info .badge { color: #000 !important; }
-
https://forum.cloudron.io/topic/8909/installation-not-possible/14
So as long as you have Taiga installed, you cannot update. If you update, you cannot install Taiga at the moment.
-
@FortyTwo you have to reinstall Cloudron with the same version, as documented here: https://docs.cloudron.io/backups/#restore-cloudron
so you'd run ./cloudron-setup --version 7.3.6 and then hit restore afterwards. -
@girish thanks, that seemed to work now! Hope I find time to set up the rest of it soon
-
@girish not yet I'm afraid, but if you have any ideas for me to try, I'll give it another go. I thought maybe other people deployed bots and knew how
-
@roofboard you can enable SFTP access in the Access Control tab of the app, where you can set which operators can use SFTP. Not sure why you'd need separate certs for that. -
@nebulon yes, just tried without the pipeline and it also crashes almost every other time. Also tried with a different surfer instance, same issue. Running out of ideas
Cloudflare Tunnel?
Cloudflare Tunnel?
Restart loop with 100% CPU since Cloudron app-version v1.37.0
Mailbox Export/Import not working
Expose disk usage / free space to API (preferably with a readonly token)
Expose disk usage / free space to API (preferably with a readonly token)
Expose disk usage / free space to API (preferably with a readonly token)
Expose disk usage / free space to API (preferably with a readonly token)
Not able to write on mounted volume
Suggestion for prereleases
Best way/protocol to mount Hetzner Storage Box?
High CPU usage after updating to Chatwoot 2.17.0
High CPU usage after updating to Chatwoot 2.17.0
NodeBB version 3 and Harmony theme
How to solve version issue 7.4.1 + taiga
Error restoring (via Minio) a version of cloudron on another version
Has anyone deployed an agent/bot with Chatwoot?
Has anyone deployed an agent/bot with Chatwoot?
SSL and certs for SFTP
surfer put crashes with --delete option and Error: ENOENT: no such file or directory